The metaphor introduced in this paper describes a more natural, human-like way to interact with objects in virtual environments. The interaction is controlled by motion-based gestures, which are recognized using the method described in [2].

Instead of artificial and unnatural tools, natural input devices (e.g. the human hand) and human-like means of expression (e.g. gestures) can be used to interact with the objects, albeit at the cost of precision. Pre-selection and artificial aids (such as displayed handles) become unnecessary, or may be used in addition. Possible application scenarios arise wherever exact alteration of the objects is secondary (e.g. presentation, brainstorming, tactical or strategic planning), or wherever more precise methods are used in combination.

Three problems have to be addressed: first, the definition of natural, human-like gestures; second, the recognition of these gestures; and finally, their association with appropriate interactions.
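
As an illustration of how these three steps might fit together, the following minimal Python sketch maps recognized gestures onto object interactions. It is not part of the paper or of the recognition method in [2]; all names (Gesture, InteractionMapper, register, dispatch) are hypothetical.

# Illustrative sketch only; names are hypothetical, not the paper's implementation.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Gesture:
    """A recognized motion-based gesture (e.g. 'push'), as delivered by a recognizer such as [2]."""
    label: str
    confidence: float

class InteractionMapper:
    """Associates recognized gestures with object interactions (the third step)."""

    def __init__(self) -> None:
        self._actions: Dict[str, Callable[[object], None]] = {}

    def register(self, label: str, action: Callable[[object], None]) -> None:
        # First step: the set of natural gestures is defined by registering them here.
        self._actions[label] = action

    def dispatch(self, gesture: Gesture, target: object) -> None:
        # Second step (recognition) happens upstream; here the recognized
        # gesture is mapped onto the appropriate interaction with the object.
        action = self._actions.get(gesture.label)
        if action is not None:
            action(target)

# Usage sketch: a 'push' gesture moves the addressed virtual object.
mapper = InteractionMapper()
mapper.register("push", lambda obj: print(f"pushing {obj}"))
mapper.dispatch(Gesture(label="push", confidence=0.9), target="cube_01")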

We focus on finding human-like, natural gestures (classified as act gestures in [7]) which, in contrast to predefined symbolic commands (e.g. the emblems used in GIVEN [3]), do not have to be learned by the user, and on combining them with appropriate actions to interact with the objects. This augments the ideas expressed in [4,9].