Wednesday, December 7, 2011

Now and Then

My project has changed considerably since the start of the semester.  It started out as an idea for a game but has become both more generalized and more focused.  Instead of having to worry about creating assets and artwork, I have been able to work on the functionality, which, when complete, will provide a base package that could be used for a variety of interactive applications (read: games).  I hope to go on to create the very game I had originally planned, and possibly more games, using this project as the base.

Tuesday, December 6, 2011

Next Steps

So, the beta review was good and bad.
The bad news is that the demo went slightly awry: a feature I had been working on that week didn't work properly.  The good news, however, is that I got a lot of nice feedback and am ready to take the next steps to get this project completed.

The top priority for this week is to correct whatever went wrong with the demo and to set up the constraints for giving directional headings to the agents.  Then I plan to start setting up the framework for creating the vector flow fields.  This shouldn't be too hard, since determining velocity changes is already required for the directional headings; it's just a matter of storing the vectors in a grid and having the agents be influenced by the stored vectors.
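
To make the flow field plan concrete, here's a rough sketch of the grid idea.  The real implementation will live in Unity, so treat this Python (and every name in it) as made-up pseudocode for the approach, not my actual code:

```python
class FlowField:
    """A 2D grid of direction vectors that agents sample each frame."""
    def __init__(self, width, height, cell_size):
        self.cell_size = cell_size
        self.cols = max(1, int(width / cell_size))
        self.rows = max(1, int(height / cell_size))
        # Every cell starts with a zero vector (no influence).
        self.vectors = [[(0.0, 0.0) for _ in range(self.cols)]
                        for _ in range(self.rows)]

    def cell_at(self, x, y):
        # Clamp world coordinates into grid bounds.
        col = min(max(int(x / self.cell_size), 0), self.cols - 1)
        row = min(max(int(y / self.cell_size), 0), self.rows - 1)
        return row, col

    def lookup(self, x, y):
        row, col = self.cell_at(x, y)
        return self.vectors[row][col]


def steer_by_field(agent_pos, agent_vel, field, influence=0.5):
    """Blend the agent's current velocity toward the stored flow vector."""
    fx, fy = field.lookup(*agent_pos)
    vx, vy = agent_vel
    return (vx + (fx - vx) * influence,
            vy + (fy - vy) * influence)
```

Since the velocity blending is the same math as the directional headings, the grid really is just a storage layer on top of what's already there.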

I want to have all of the above completed by the end of this week, and by the middle of next week I'd like to have the vector fields working smoothly.  That leaves the final week to add a couple of extra features, rework the paper that goes with this project, and create the final video presentation.

Marley

Tuesday, November 15, 2011

Progress and Control

Hi all,

Unfortunately, I wasn't able to work on my project very much last week... but this week it's time to get back in action.

I was able to get the tabs for changing between modes working.  They need a bit of tweaking, but for now, they do the job nicely.

Hovering over the top tab changes the mode to Create.
When in creation mode, agents are created at the hand points after a small delay.  This gives the user time to decide where to place the next two agents.  Of course, this isn't the best solution.  I really would like to get hand depth involved here in order to give the user better control over when and where to create agents, as mentioned in an earlier post.
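
For reference, the delay logic is about as simple as it sounds.  Something like this sketch (Python pseudocode with invented names, since the actual code is in Unity):

```python
class CreateMode:
    """Spawns an agent at each hand point after a fixed delay."""
    def __init__(self, delay=2.0):
        self.delay = delay      # seconds between spawns
        self.timer = 0.0

    def update(self, dt, hand_points, spawn_agent):
        self.timer += dt
        if self.timer >= self.delay:
            for point in hand_points:   # typically left and right hand
                spawn_agent(point)      # create an agent at that position
            self.timer = 0.0            # restart the countdown
```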

Hovering over the middle tab changes the mode to Select.
Here you can select agents and have them follow each hand.  Deselection still works by 'clapping', but I want to try the 'shake off' idea suggested to me during alpha reviews.

Hovering over the bottom tab changes the mode to Control.
Right now this does nothing... very sad... but I want this to be my next major focus for beta reviews.  I'm caught between jumping right into flow fields and starting with something a little less complex and then building up.  My idea right now is to start by giving selected agents directional headings.  This requires being able to create vectors from hand motions and apply those vectors to the agents.  With this more basic system in place, I could then build on it to generate vectors over a grid to create the flow field.  The agents would then update their current velocities based on the flow vectors in their predicted grid positions.
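
To sketch what I mean by creating vectors from hand motions (again in made-up Python pseudocode, not the actual Unity code):

```python
class Agent:
    """Minimal stand-in for an agent with a 2D velocity."""
    def __init__(self, velocity=(0.0, 0.0)):
        self.velocity = velocity


def hand_motion_vector(prev_pos, curr_pos, dt):
    """Turn the hand's displacement this frame into a velocity-like vector."""
    return ((curr_pos[0] - prev_pos[0]) / dt,
            (curr_pos[1] - prev_pos[1]) / dt)


def apply_heading(selected_agents, heading, blend=0.2):
    """Nudge each selected agent's velocity toward the gestured heading."""
    for agent in selected_agents:
        vx, vy = agent.velocity
        agent.velocity = (vx + (heading[0] - vx) * blend,
                          vy + (heading[1] - vy) * blend)
```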

That's all for now,
Thanks!

Marley

Thursday, November 3, 2011

Alpha Review Thoughts

Hello,
I have received the feedback from the first review of my project.  I'm happy to say it was very positive and included a couple of nice suggestions and things to keep in mind.

One critique was in reference to the clapping gesture that deselects the agents.  In truth, the user does not actually have to clap, only bring their two hands together.  I do, however, like the shake idea they provided and may implement it alongside the original deselection method, in order to compare the two and determine which is more intuitive.
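
For anyone curious, both deselection gestures boil down to very simple checks.  A minimal sketch (Python pseudocode; the distance and velocity thresholds are placeholders I haven't tuned):

```python
import math

def hands_together(left, right, threshold=0.15):
    """'Clap' deselection: the two hand points come within a small distance."""
    return math.hypot(left[0] - right[0], left[1] - right[1]) < threshold

def is_shake(recent_x_velocities, min_reversals=3, min_speed=0.2):
    """Crude 'shake off' test: count direction flips in recent hand motion."""
    signs = [v > 0 for v in recent_x_velocities if abs(v) > min_speed]
    flips = sum(1 for a, b in zip(signs, signs[1:]) if a != b)
    return flips >= min_reversals
```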

Something else brought up was teaching the user how to use the system.  It's definitely in the back of my mind.  I would like to have simple visuals (text and/or demonstrations) appear for each option as the user explores it the first time the system is started.  After that, the visuals would only display after a certain period of idleness, as a kind of prompt or reminder.

As for the progress I've made so far this week, the user can now select the create tab, and agents will appear at the two hand positions.  This implementation is very basic at the moment: the agents appear at each position after a two-second delay.  The user can then choose the select tab and select the agents like before.

My next goals are to figure out a good way to give the user more control over when and where an agent is created, to clean up the switch between modes, and to begin thinking about the other control schemes.  I've been brainstorming about vector fields and how to begin implementing them as a starting point.

Thanks,
Marley

Wednesday, November 2, 2011

Changing Modes

Hi all,

This week I'm working on a simple GUI for switching between the creation, selection, and control modes. I haven't come up with the best solution, but it should work and that's the most important thing at this time. Being able to switch modes will allow me to implement and test the features of one mode without interference from the others (such as trying to create agents but accidentally selecting newly created ones).

[Image: Simple GUI for changing between creation, selection, and control modes.  In this example, the right hand has selected create mode and the left hand is creating new agents.]
Three tabs will be labeled, one for each mode.  Moving a hand point near these tabs will cause the closest tab to move out and perhaps change color.  This reaction both signals the user and, hopefully, makes it easier for them to select that tab if they want to.  By hovering over the tab for some amount of time, the tab is selected and the mode changes.
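
The hover-to-select logic is just a dwell timer.  Roughly (Python pseudocode with a guessed dwell time; the real version would live in Unity):

```python
class HoverTab:
    """A tab that activates after the hand hovers over it long enough."""
    def __init__(self, mode_name, dwell_time=1.5):
        self.mode_name = mode_name
        self.dwell_time = dwell_time   # seconds; placeholder value
        self.hover_timer = 0.0

    def update(self, dt, hand_over_tab):
        """Returns the new mode name once, when the dwell completes."""
        if not hand_over_tab:
            self.hover_timer = 0.0     # reset if the hand wanders off
            return None
        self.hover_timer += dt
        if self.hover_timer >= self.dwell_time:
            self.hover_timer = 0.0
            return self.mode_name
        return None
```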

Creation
When in creation mode, the hands can move around the scene and create agents.  One idea is that moving a hand forward in the depth plane will 'plant' an agent, creating one at that point.  This way, agents aren't created at every point in space the hand passes through, and the user gets some control, unlike the alternative idea of simply delaying when agents are created.
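
The 'plant' idea would amount to watching the hand's depth for a sharp push toward the sensor.  Something like this sketch (Python pseudocode; the push distance is a guess):

```python
def detect_plant(depth_samples, push_distance=0.25):
    """Fires when the hand pushes forward (toward the sensor) far enough.

    depth_samples: recent hand depths in meters, oldest first.  Kinect depth
    shrinks as the hand moves toward the sensor, so a big drop across the
    window reads as a 'plant'.
    """
    if len(depth_samples) < 2:
        return False
    return depth_samples[0] - depth_samples[-1] > push_distance
```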

Selection
This is the mode I've been working on the most.  Here both hands can select separate groups of agents and then each group follows that respective hand.  I think I will leave this simple control aspect in the selection mode for now.

Control
The control mode will probably have a few of its own tabs each representing a different control scheme.  Select groups in selection mode and then go to control mode.  The selections will then be locked and the user can set up different control schemes by first selecting the scheme to use and then using the appropriate gesture.


I would like not to have to rely on selecting tabs... but for now, as mentioned before, it's the easiest way to implement all of the different aspects and test them separately.  I'm definitely open to any ideas about ways around this.  Another issue I can already foresee is selecting agents and then moving the hand to the tabs.  If the user wants to select a specific agent in the middle of a large group, how can they tell the system they want only that one, without picking up other agents on the way to it and on the way back to the tabs?  I guess one way would be to have two selection schemes: one where both hands can freely move about and select agents, and another where both hands work in tandem to inhibit selection until the desired agent is reached.

Anyway, that's all for now.
Once the different modes are set up I will begin working on some of the control schemes.  Leader following, simple directional headings, and maybe flow fields are ones that I would like to start with.  There will be more about that in future posts.

Thank you,
Marley

Monday, October 17, 2011

System Flow

Here is a basic system diagram that outlines the flow of the project.


User input is collected through the Kinect sensor as raw data, which is then passed through a Kinect plugin for Unity.  The game engine processes the data and, depending on whether the user is creating, selecting, or controlling agents, updates the game appropriately, providing feedback to the user.  Also notice that if the user is controlling the agents, the input is further processed by the steering behavior and flow field implementations in order to produce the correct results.

Sunday, October 16, 2011

More Changes

Hi,


Sorry I haven't been keeping this blog up-to-date recently. Some changes have occurred that will broaden the scope of my project and make it a little more interesting.  Instead of a game, I will now be working on a crowd dynamics authoring tool that will involve agent creation, selection, and control using the Kinect.  I really like this idea because it can be used for a variety of interactive activities, including the game I had originally planned to create.


The Break Down


Create:
Right now, one must left-click with the mouse in order to create new agents.  However, the idea is to use only hand motions for all interactions, so this needs to change.  My idea so far is some kind of HUD (buttons, tabs, etc.) that the user could hover over to switch to 'creation' mode (and 'selection'/'control' mode, etc.).  It would be nice to avoid this, but I'm still brainstorming ideas.


Select:
This is what I've been working on the most so far.  At the moment, the user's hands are represented as circles whose radius is based on depth.  If an agent is within this circle, it is selected and follows the hand that selected it.  In the future, these aspects may change slightly in how they are implemented, but the principle will remain the same: each hand is capable of making its own selections, from a single agent to a large group.
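
As a sketch of the current selection test (Python pseudocode; the depth-to-radius mapping here is invented, not my actual numbers):

```python
import math

class HandCursor:
    """Hand point with a selection circle whose radius depends on depth."""
    def __init__(self, x, y, depth):
        self.x, self.y = x, y
        # Placeholder mapping: the closer the hand, the larger the circle.
        self.radius = max(0.2, 1.0 - 0.3 * depth)

    def selects(self, agent_pos):
        """True if the agent falls inside this hand's selection circle."""
        return math.hypot(agent_pos[0] - self.x,
                          agent_pos[1] - self.y) <= self.radius
```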


Control (the fun stuff):
As mentioned before, agents selected by a hand will follow that hand.  The hand represents a point in 2D space that the agent sets as a goal and walks toward until deselected.  A lot more control functionality will hopefully be implemented as well, and this is where gesture interaction can get really interesting.  For example, one could give agents a suggested trajectory ('drawing' a path), a desired velocity based on the intensity of hand movement, or a direction of heading using simple directional motions (moving the right hand from the center to the right would cause selected agents to head right; coupled with the previous idea, moving the hand from center to right faster would cause them to move faster).  Going further, there is also the possibility of creating vector fields.  These fields would influence the agents' velocities, and each hand could even create its own vector field for its own selected agents.
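
The basic follow behavior is essentially Reynolds' 'seek': treat the hand as a goal point and steer the agent's velocity toward it.  A minimal sketch (Python pseudocode, with guessed speed and force limits):

```python
import math

def seek(agent_pos, agent_vel, goal, max_speed=1.5, max_force=0.1):
    """Steer the current velocity toward the goal point (Reynolds' seek)."""
    dx, dy = goal[0] - agent_pos[0], goal[1] - agent_pos[1]
    dist = math.hypot(dx, dy) or 1e-6
    # Desired velocity points straight at the goal at full speed.
    desired = (dx / dist * max_speed, dy / dist * max_speed)
    # The steering force is the difference, clamped to max_force.
    sx, sy = desired[0] - agent_vel[0], desired[1] - agent_vel[1]
    mag = math.hypot(sx, sy)
    if mag > max_force:
        sx, sy = sx / mag * max_force, sy / mag * max_force
    return (agent_vel[0] + sx, agent_vel[1] + sy)
```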


These techniques for selection and control could then be used to create specific interactions between agents and groups, such as the steering behaviors described by Craig Reynolds.  These include pursue and evade, leader following, and instances of flocking and queuing.  Finally, using behavior trees, the user could set up a cascade of interactions between individuals or groups and set off a chain of events in the game world, all based on intuitive hand motions.  This last part may have to be a future goal, but it is something I would very much like to get to by the end of the semester.
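
Leader following, for instance, is just seek aimed at a point behind the leader.  A self-contained sketch (Python pseudocode; the distances and speeds are placeholders):

```python
import math

def follow_leader(follower_pos, leader_pos, leader_vel,
                  behind_dist=0.5, speed=1.2):
    """Reynolds-style leader following: head for a point behind the leader."""
    lv = math.hypot(*leader_vel) or 1e-6
    # The target sits behind_dist behind the leader, opposite its heading.
    target = (leader_pos[0] - leader_vel[0] / lv * behind_dist,
              leader_pos[1] - leader_vel[1] / lv * behind_dist)
    dx, dy = target[0] - follower_pos[0], target[1] - follower_pos[1]
    d = math.hypot(dx, dy) or 1e-6
    # Approach at a capped speed, slowing down near the target.
    s = min(speed, 2.0 * d)
    return (dx / d * s, dy / d * s)
```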


A final note is the option for two players.  It is something that I will keep in the back of my mind because of the many fun possibilities having two users creates.  I look forward to thinking about this idea more.


Anyway, that is all for now.  I had a system diagram, but it needs a few changes since my project has changed some.  It will be posted by tomorrow night.


Thanks,
Marley

Friday, September 30, 2011

Changes

So that part in my last post about gestures? Scratch that.

I'm going the way of interactive vector fields.  Craig Reynolds has done some really cool work in the field of autonomous agents.  Check out his site here: http://www.red3d.com/cwr/

In particular, I'm looking at this paper, which highlights several ideas behind modeling steering behavior.  For the game, I plan to have the player's movements influence a vector field set in a 2D grid.  The agents will then use the field to direct their steering behavior, as seen in this applet from Reynolds' site: http://www.red3d.com/cwr/steer/FlowFollow.html
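
To sketch how the player's movements could write into that grid (Python pseudocode with invented names; a sparse dictionary stands in for the grid):

```python
class PaintedField:
    """Sparse flow field: cells get a vector when the hand sweeps over them."""
    def __init__(self, cell_size=0.5):
        self.cell_size = cell_size
        self.cells = {}   # (row, col) -> (vx, vy)

    def paint(self, prev_hand, curr_hand):
        """Store the hand's motion direction in the cell it's currently over."""
        col = int(curr_hand[0] / self.cell_size)
        row = int(curr_hand[1] / self.cell_size)
        self.cells[(row, col)] = (curr_hand[0] - prev_hand[0],
                                  curr_hand[1] - prev_hand[1])

    def lookup(self, pos):
        """Agents sample this each frame; empty cells exert no influence."""
        key = (int(pos[1] / self.cell_size), int(pos[0] / self.cell_size))
        return self.cells.get(key, (0.0, 0.0))
```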

For now, however, I'm working on ways to select a subset of agents using mouse movements.  A really nice navigation framework was given to me, which allows me to focus on the key interactivity of the game once these smaller details are squared away.  Once selection is worked out, I will move on to getting the Kinect to work with the nav framework, using a nice package from here that was further modified by Raul and Frank, grad students here at Penn.

Next time I'll have a more detailed post as well as a review of what I've done so far.

Marley

Thursday, September 22, 2011

Design Elements

Hello again,

This post will hopefully begin to address some of the key aspects of the game's design.

Let's start with the general layout of the levels and what's in them.

Each level is a simple path from start to finish.  The camera will be at an angled overhead view, allowing the player to see the entire course while maintaining visual and interactive depth.

The flags will act as safe zones where the agents (called shuffles) are safe from harm.  An activated flag will turn the safe zone into a spawn point, so if the player were to lose all their shuffles, they would respawn at the last safe zone that was activated.

In order to activate a flag, the player moves their hand over the flag until a green ring appears.  This means the player's hand is positioned correctly.  After a certain amount of time, the ring will indicate that the safe zone has been activated (either by flashing, blinking, or being filled in a clockwise fashion, the same as selecting in the Kinect Hub).

Next are three challenges that will put the gesture controls to the test.

Shuffles cannot climb this incline when merely walking.  However, with enough momentum, they can propel themselves up the incline and over the peak.  Make sure your shuffles get a good running start!

A winter wonderland! Shuffles like to skate around on ice (in fact, they can't help it) unless you direct them to walk slowly.  Ice can be friend or foe.  Oftentimes, deadly objects such as wall spikes or bramble thorn bushes are located right after an ice patch! (Now who would do that?)  But maybe its ability to accelerate shuffles will come in handy...

Let sleeping dogs lie.  Tiptoe past the wuffles or your shuffles may become a tasty snack!
*Note: This may test the Kinect's sensitivity level when receiving data or my ability to handle very small or very slow gestures correctly.

And finally I'd like to introduce shuffles and wuffles!
Of course, the designs might change in the future, but I hope I can implement them in the final version of the game.

Here are some other important notes:

The first level will start out with somewhere around five to ten shuffles, and the only challenges would be the incline or ice, with no chance of being damaged.  This should make it easier for the player to get a feel for the game and its controls.  I think twenty to thirty shuffles would be the max in the game, although I'm curious what controlling fifty or more might be like, hehe.
For damage, I was thinking of treating the entire herd as one entity with a certain number of hit points.  Then, if hit points are lost, a certain number of shuffles die ( :c ) based on the number of hit points lost.  That should be a lot easier to deal with than giving each shuffle its own hit points.  But I'm not really sure about that.

And finally I'd like to start talking about the gestures by first introducing one.  In a future post, I will document all of the gestures needed for the game with descriptions and video or animation of some form demonstrating each one.

The Wave is the gesture used to get your shuffles movin'! Wave right to left to tell them to move left, left to right to tell them to move right, toward yourself to make them move down, and away from you to make them move up.  Gesture intensity will determine how quickly or slowly the shuffles move; to make them run, use a faster, more forceful wave.
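
Here's roughly how I imagine the wave mapping working (Python pseudocode; the speed thresholds are guesses, and for simplicity I've flattened the toward/away axis into the second component of a 2D velocity):

```python
import math

def wave_to_motion(hand_velocity, walk_speed=0.8, run_speed=2.5,
                   run_threshold=1.5, min_wave_speed=0.2):
    """Map a wave's direction to a heading and its intensity to walk vs. run."""
    vx, vy = hand_velocity
    speed = math.hypot(vx, vy)
    if speed < min_wave_speed:
        return None                       # too slow to count as a wave
    # The dominant axis of the wave picks the heading.
    if abs(vx) >= abs(vy):
        heading = (1, 0) if vx > 0 else (-1, 0)
    else:
        heading = (0, 1) if vy > 0 else (0, -1)
    # Intensity: a forceful wave means run, a gentle one means walk.
    agent_speed = run_speed if speed > run_threshold else walk_speed
    return (heading[0] * agent_speed, heading[1] * agent_speed)
```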

Thanks for reading!
Questions and comments are much appreciated. :)
Marley


Monday, September 19, 2011

The Game Plan: Outline

Heyo!

So here I'm going to give a very basic outline of what needs to be done over the coming months.  I will follow up this outline with more detailed posts about the game design and mechanics.

1 Start with the Basics
Read through and learn the Kinect SDK and how to navigate/use Unity 3.  The goal is to get Unity to display the hands, wrists, elbows, shoulders, and shoulder center as points on the screen based on skeletal data from the Kinect.  These are the only joints that I think will be needed in order to play the game. (Note, however, that the joints will not be displayed on screen during gameplay.)

2 Moving Up
Come up with the gestures needed for gameplay.  The goal is to create a simple 'copycat' game where the player is shown a gesture and then asked to repeat it.  If the gesture is correct (within a certain tolerance), the game accepts the input and lets the player know they were right.  If the gesture is not accepted, the game asks the player to try again.  This should be a good way to test whether the chosen gestures are easy to perform and how strict the game should be when accepting input (a rough sketch of the matching test follows below).
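
As a sketch of how the matching might work (Python pseudocode; it assumes both gesture paths are already resampled to the same number of points, and the tolerance is arbitrary):

```python
import math

def gesture_error(template, attempt):
    """Average point-to-point distance between two equal-length gesture paths."""
    dists = [math.hypot(tx - ax, ty - ay)
             for (tx, ty), (ax, ay) in zip(template, attempt)]
    return sum(dists) / len(dists)

def is_match(template, attempt, tolerance=0.25):
    """Accept the player's attempt if it stays close enough to the template."""
    return gesture_error(template, attempt) <= tolerance
```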

3 The Herd
Get some basic group agents set up using behavior trees so that the player can start interacting with them.  The goal is to have a very, very stripped-down version of the final game up and running.  This will act as a starting point for determining gesture 'force'.  One of the overall concepts of this project is to allow a single type of gesture to also carry a range of intensity, which then maps to a range of reaction intensity in the agents.  So, for example, if the player wants the agents to walk right, they will wave from left to right, but if they want them to walk faster or run, they will wave from left to right faster.

Which brings me to a side note:
     I know it's called Herd 'Em, but this game is more about the agents looking to the player for direction, instead of the agents wanting to avoid an object, such as the player's hands.  I think using avoidance techniques would be too challenging for the player in this game.  To compare, sheep herding dogs are born with the characteristics needed to successfully control sheep.  These traits are then further refined through months to years of training so that they can be used reliably and effectively.  Also, there is much more going on between sheep and dog than simply the sheep wanting to avoid the dog.  The agents in the game cannot determine the intent of the player; there is no eye contact, stalking, or other essential body-language communication used between dogs and sheep.
     I really want to focus more on the range of intensity (I used urgency before, but I like intensity better) that can be found in each gesture.  I believe this approach makes much more sense to implement than a strict sheep herding model.  Perhaps I should name this Direct 'Em? xD

4 A Smarter Herd
Improve upon the group dynamics and work on gesture refinement (if necessary, which is likely).  The goal here is to make the agents more intelligent about how they react in certain situations and, hopefully, to add some simple obstacles to the scene to test those reactions.

5 Level Layout
Develop the levels, including types of obstacles, dangers, length, and other design elements.  The goal is to create at least two levels--the first being more of an introduction level and the second a little more challenging--with rough implementations of the obstacles in place, and to actually be able to play through them!

6 Making Things Look Pretty
Art! Work on the overall style for the game and the design for the agents, menus, decorations, obstacles and anything else necessary.  The goal here is to enhance the game experience with fun visuals and, hopefully, sound.

7 Last But Not Least
Improve upon anything that needs work and/or add additional details to the game mechanics or design.  The goal, make it better!

Final note: time permitting, I'd like to include ways to give each agent, or a subset of the agents, certain characteristics.  This would be a way to make the game more challenging and the agents more engaging.  I'd also like to point out that, while art and design are incredibly important to any game, the core game mechanics (gesture input and agent reactions) have top priority during this project.


Thanks for reading!
Marley

Monday, September 12, 2011

The Journey Begins

Hello all!  This semester I will be working on my senior design project and I'm very excited to get started.  Here is the abstract for my project, which should be a good starting point for understanding what it's about:

The Kinect has opened up great opportunities to develop new ways of interacting with virtual games.  I will use Microsoft's Kinect SDK to create a 3D game where the player herds a group of agents around obstacles and through other dangers to reach the goal with as many surviving agents as possible.  This game will not only utilize gesture input to direct the agents, but will also take into account the urgency with which the player gestures.  If a player wants the agents to stop suddenly, they can direct them with a faster, more forceful stop gesture.  Another notable feature is that there will not be a character in the game that represents the player; the player will interact directly with the agents, and vice versa.  Lastly, time permitting, I hope to include parameters that give each agent certain characteristics that can help or hinder the player, such as willingness to follow directions or being easily distracted.  These elements would help make the game more challenging and more satisfying upon successfully reaching the goal.

Marley