We present a data-driven method for the real-time synthesis of believable steering behaviours for virtual crowds. The proposed method interlinks the input examples into a structure we call the perception-action graph (PAG), which can be used at run-time to efficiently synthesize believable virtual crowds. A virtual character's state is encoded using a temporal representation, the Temporal Perception Pattern (TPP). The graph nodes store groups of similar TPPs, whereas the edges connecting the nodes store the actions (trajectories) that were partially responsible for the transformation between those TPPs. The proposed method was tested on a variety of scenarios using different input data and compared against a nearest-neighbours approach, which is commonly employed in other data-driven crowd simulation systems. The results show up to an order of magnitude speed-up with similar or better simulation quality.
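The abstract outlines the PAG's structure: nodes that group similar perception states (TPPs) and edges that store the actions observed between them, queried at run-time by similarity. The following is a minimal sketch of that idea, assuming a simple Euclidean distance over TPP vectors and a fixed grouping threshold; the class name, threshold parameter, and encoding are illustrative assumptions, not the paper's actual implementation.

```python
import math
from collections import defaultdict


class PerceptionActionGraph:
    """Minimal sketch of a perception-action graph (PAG).

    Nodes hold groups of similar TPP vectors; edges between nodes
    hold the actions (trajectories) observed between those states.
    The similarity metric and threshold here are assumptions.
    """

    def __init__(self, threshold=1.0):
        self.threshold = threshold        # max distance for grouping similar TPPs
        self.nodes = []                   # each node: a list of similar TPP vectors
        self.edges = defaultdict(list)    # (src_node, dst_node) -> list of actions

    @staticmethod
    def _dist(a, b):
        # Euclidean distance between two TPP vectors (illustrative choice)
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def _node_for(self, tpp):
        # Reuse an existing node whose representative TPP is close enough,
        # otherwise create a new node for this perception state.
        for i, group in enumerate(self.nodes):
            if self._dist(group[0], tpp) <= self.threshold:
                group.append(tpp)
                return i
        self.nodes.append([tpp])
        return len(self.nodes) - 1

    def add_example(self, tpp_before, action, tpp_after):
        """Interlink one input example: state -> action -> resulting state."""
        src = self._node_for(tpp_before)
        dst = self._node_for(tpp_after)
        self.edges[(src, dst)].append(action)

    def query(self, tpp):
        """Return candidate actions stored on edges leaving the nearest node."""
        if not self.nodes:
            return []
        best = min(range(len(self.nodes)),
                   key=lambda i: self._dist(self.nodes[i][0], tpp))
        return [a for (s, _), acts in self.edges.items() if s == best
                for a in acts]
```

At run-time, a character encodes its current perception as a TPP and queries the graph for actions recorded in similar past situations, e.g. `pag.query(current_tpp)`; grouping similar TPPs into shared nodes is what avoids the per-frame scan over all examples that a plain nearest-neighbours lookup requires.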