Monday, October 17, 2011

System Flow

Here is a basic system diagram that outlines the flow of the project.


User input is collected through the Kinect sensor as raw data, which is then passed through a Kinect plugin for Unity. The game engine processes the data and, depending on whether the user is creating, selecting, or controlling agents, updates the scene appropriately and provides feedback to the user. Notice also that when the user is controlling agents, the input is processed further by the steering behavior and flow field implementations to produce the intended motion.
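To make the flow concrete, here is a minimal per-frame dispatch sketch in Python rather than the project's actual Unity code. The Hand and World types and the three handler stubs are hypothetical stand-ins for the Kinect plugin output and the engine-side logic.

```python
from dataclasses import dataclass, field

# Hypothetical stand-ins for the Kinect plugin output and engine state;
# the real project reads hand data through a Kinect plugin for Unity.

@dataclass
class Hand:
    x: float
    y: float
    depth: float

@dataclass
class World:
    mode: str = "select"               # "create", "select", or "control"
    agents: list = field(default_factory=list)

def update(world: World, hands: list[Hand]) -> None:
    """One frame of the flow: raw hand data in, scene update out."""
    if world.mode == "create":
        create_agents(world, hands)
    elif world.mode == "select":
        select_agents(world, hands)
    elif world.mode == "control":
        # Control input is shaped further by the steering behavior and
        # flow field code before the agents actually move.
        control_agents(world, hands)

# Placeholder handlers for the three interaction modes.
def create_agents(world, hands): pass
def select_agents(world, hands): pass
def control_agents(world, hands): pass
```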

Sunday, October 16, 2011

More Changes

Hi,


Sorry I haven't been keeping this blog up-to-date recently. Some changes have occurred that will broaden the scope of my project and make it a little more interesting.  Instead of a game, I will now be working on a crowd dynamics authoring tool that will involve agent creation, selection, and control using the Kinect.  I really like this idea because it can be used for a variety of interactive activities, including the game I had originally planned to create.


The Breakdown


Create:
Right now, one must left-click with the mouse to create new agents.  However, the goal is to use only hand motions for all interactions, so this needs to change.  My idea so far is some kind of HUD (buttons, tabs, etc.) that the user could hover over to switch to 'creation' mode (or 'selection/control' mode, etc.).  It would be nice to avoid a HUD altogether, though, so I'm still brainstorming ideas.
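To illustrate the hover idea, here is a small dwell-to-activate sketch in Python: holding a hand over a HUD button region for a moment switches modes without any click gesture. The dwell time and circular hit region are assumptions for the example, not a decided design.

```python
import time

DWELL_SECONDS = 1.0   # assumed hover time before a button activates

class HudButton:
    """Hypothetical HUD button activated by hovering, not clicking."""

    def __init__(self, mode, x, y, radius):
        self.mode = mode
        self.x, self.y, self.radius = x, y, radius
        self._hover_start = None

    def update(self, hand_x, hand_y, now=None):
        """Return this button's mode once the hand has hovered long enough."""
        now = time.monotonic() if now is None else now
        inside = (hand_x - self.x) ** 2 + (hand_y - self.y) ** 2 <= self.radius ** 2
        if not inside:
            self._hover_start = None      # hand left; reset the dwell timer
            return None
        if self._hover_start is None:
            self._hover_start = now       # hand just entered the region
        if now - self._hover_start >= DWELL_SECONDS:
            return self.mode
        return None
```

Each frame, the current hand position would be fed to `update()`, and a non-None result would set the interaction mode.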


Select:
This is what I've been working on the most so far.  At the moment, each of the user's hands is represented as a circle whose radius depends on depth.  If an agent is within this circle, it is selected and follows the hand that selected it.  The implementation details may change slightly in the future, but the principle will remain the same: each hand is capable of making its own selections, from a single agent to a large group.
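Here is a minimal sketch of that selection rule in Python. The depth-to-radius mapping (a closer hand gives a larger circle) and all of the constants are assumptions for illustration; the post only says the radius is based on depth.

```python
import math

MIN_RADIUS, MAX_RADIUS = 0.5, 3.0   # assumed selection-circle bounds
MIN_DEPTH, MAX_DEPTH = 0.5, 4.0     # assumed hand distance range (meters)

def hand_radius(depth):
    """Map hand depth to a selection radius; closer hand, bigger circle."""
    t = (depth - MIN_DEPTH) / (MAX_DEPTH - MIN_DEPTH)
    t = min(max(t, 0.0), 1.0)
    return MAX_RADIUS - t * (MAX_RADIUS - MIN_RADIUS)

def select(agents, hand_x, hand_y, hand_depth):
    """Return every agent inside this hand's selection circle."""
    r = hand_radius(hand_depth)
    return [a for a in agents
            if math.hypot(a["x"] - hand_x, a["y"] - hand_y) <= r]
```

Because each hand calls `select()` independently, one hand can grab a single agent while the other sweeps up a whole group.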


Control (the fun stuff):
As mentioned before, agents selected by a hand will follow that hand.  The hand represents a point in 2D space that each selected agent sets as a goal and walks toward until deselected.  I hope to implement much more control functionality as well, and this is where gesture interaction can get really interesting.  For example, the user could give agents a suggested trajectory ('drawing' a path), set a desired velocity based on the intensity of hand movement, and set a heading with simple directional motions (moving the right hand from center to right would cause selected agents to head right; combined with the previous idea, moving the hand from center to right faster would cause the agents to move faster).  Going further, there is also the possibility of creating vector fields.  These fields would influence the agents' velocities, and each hand could even create its own vector field for its selected agents.
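As a very simplified illustration, here is a Python sketch of the "seek the hand" control, with the hand's own speed scaling the agents' desired speed, plus a vector field lookup. The agent dictionaries, the constants, and the plain Euler integration step are assumptions for the example, not the project's Unity implementation.

```python
import math

BASE_SPEED = 1.0   # assumed agent speed when the hand is still
SPEED_GAIN = 0.5   # assumed extra speed per unit of hand speed
MAX_FORCE = 2.0    # assumed cap on steering force

def seek(agent, goal, desired_speed, dt):
    """Steer one agent toward a goal point (classic seek behavior)."""
    dx, dy = goal[0] - agent["x"], goal[1] - agent["y"]
    dist = math.hypot(dx, dy)
    if dist < 1e-6:
        return
    # Desired velocity points straight at the goal at the desired speed.
    dvx, dvy = dx / dist * desired_speed, dy / dist * desired_speed
    # Steering force = desired velocity - current velocity, clamped.
    fx, fy = dvx - agent["vx"], dvy - agent["vy"]
    f = math.hypot(fx, fy)
    if f > MAX_FORCE:
        fx, fy = fx / f * MAX_FORCE, fy / f * MAX_FORCE
    agent["vx"] += fx * dt
    agent["vy"] += fy * dt
    agent["x"] += agent["vx"] * dt
    agent["y"] += agent["vy"] * dt

def control(selected, hand_pos, hand_velocity, dt):
    """Faster hand movement means faster agents."""
    hand_speed = math.hypot(*hand_velocity)
    desired = BASE_SPEED + SPEED_GAIN * hand_speed
    for agent in selected:
        seek(agent, hand_pos, desired, dt)

def flow_field_velocity(grid, cell_size, x, y):
    """Sample a vector field stored as grid[ix][iy] = (vx, vy)."""
    ix = min(max(int(x / cell_size), 0), len(grid) - 1)
    iy = min(max(int(y / cell_size), 0), len(grid[0]) - 1)
    return grid[ix][iy]
```

In practice this would run every frame for each selected agent, with the hand's velocity estimated from successive Kinect frames, and the sampled field vector blended into each agent's velocity.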


These techniques for selection and control could then be used to create specific interactions between agents and groups, such as the steering behaviors described by Craig Reynolds.  These include setting up pursuit and evasion, leader following, and instances of flocking and queuing.  Finally, using behavior trees, the user could set up a cascade of interactions between individuals or groups and set off a chain of events in the game world, all based on intuitive hand motions.  This last part may have to be a future goal, but it is something I would very much like to get to by the end of the semester.
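For a flavor of the Reynolds behaviors, here is a toy pursue/evade sketch: the pursuer seeks the target's predicted future position, and the evader flees from it. The state dictionaries and lookahead value are assumptions, and this is only the directional core of the behaviors, not a full implementation.

```python
import math

def predict(target, lookahead):
    """Extrapolate the target's position a short time into the future."""
    return (target["x"] + target["vx"] * lookahead,
            target["y"] + target["vy"] * lookahead)

def pursue_direction(pursuer, target, lookahead=0.5):
    """Unit vector toward the target's predicted position (pursue)."""
    gx, gy = predict(target, lookahead)
    dx, dy = gx - pursuer["x"], gy - pursuer["y"]
    d = math.hypot(dx, dy) or 1.0
    return dx / d, dy / d

def evade_direction(evader, threat, lookahead=0.5):
    """Evade is simply the opposite of pursuing the threat."""
    px, py = pursue_direction(evader, threat, lookahead)
    return -px, -py
```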


A final note is the option for two players.  It is something that I will keep in the back of my mind because of the many fun possibilities that two users create.  I look forward to thinking about this idea more.


Anyway, that is all for now.  I had a system diagram, but it needs a few changes since my project has changed some.  It will be posted by tomorrow night.


Thanks,
Marley