Scenario

Scenario is a world-first 360-degree 3D cinematic installation whose narrative is interactively produced by the audience and humanoid characters imbued with artificial intelligence (AI). The title is a Commedia dell'Arte term, referring to how the dramatic action depends on the way actors and audience interact.

Scenario is inspired by the experimental television work of Samuel Beckett. A female humanoid character has been imprisoned in a concealed basement, along with her four children, by her father, who lives above ground with his daytime family. Set within this underground labyrinth, she and her children take the audience through various basement spaces in an attempt to discover the possible ways in which they and the audience can resolve the mystery of their imprisonment and so effect an escape before certain death. Watching them is a series of shadowy humanoid sentinels who track the family and the audience, physically trying to block their escape.

Scenario installation view

Rapidly interpreting and responding to audience behaviour by means of a sophisticated AI system, the humanoid sentinels work effortlessly to try to block the audience and the family at every turn. This two-fold dramatic action enables the work to create a narrative that evolves according to how the humanoids and the audience physically interact with each other. This is effected by means of a vision system that tracks the audience's behaviour, linked to an AI system that allows the humanoids to independently interpret and respond to it.

Background

Scenario investigates the differences in narrative reasoning between humanoids and human participants in interactive cinema. It proposes that when humanoids are provided with a modest ability to sense and interpret the actions of human participants sharing a digital cinematic environment, their interactive responses will co-evolve autonomously with those of the human participants. In an experimental encounter with human participants, inspired by Samuel Beckett's Quad 1 + 2, the study tests the narrative autonomy of humanoids, understood as the capacity to make independent decisions using an AI language.

Objectives

The project has three key research objectives:

  • Explain co-evolutionary narrative as the interaction between human participants and autonomous humanoids. Narrative autonomy of humanoids is defined as their capacity to act, with reference to a vision system that tracks the human participants, using an Artificial Intelligence (AI) system in consultation with a knowledge database. The implementation of narrative autonomy by humanoids is twofold. Firstly, a humanoid is obliged to act on its own decision-making in response to the meanings it ascribes to human behaviour. Secondly, a humanoid's motivation can be expressed through its behavioural response.
  • Test outcomes of co-evolutionary interaction between human participants and humanoids in a cinematic experiment. Scenario involves the apprehension, interpretation and response by multiple humanoids to clusters of more than one human participant, thus focusing upon group interaction.
  • Evaluate the significance of co-evolutionary narrative as a condition of: differentiated clarity in sensing and tracking; autonomy recoverable in the deliberation of humanoids; quality of aesthetic conviction in the co-evolutionary responses of humanoids and human participants.

Methodology

Scenario undertakes a narrative experiment within a framework that integrates independently proven experimental systems so as to test the hypothesis that co-evolutionary narrative can aesthetically demonstrate levels of autonomy in humanoid characters.

The materials are integrated from established interactive technologies. First among these is iCinema's world-first 360-degree 3D cinematic theatre AVIE3, which enables the engagement of an experimental interactive framework. This framework incorporates digital sensing, interpretive and responsive systems. The narrative experiment is designed to dramatise distinct behavioural processes, thus probing humanoid autonomy and the cognitive gap between humanoid and human participant. The design is sufficient to provide humanoids with minimal perceptive, reasoning and expressive capabilities that allow them to track, deliberate on and react to human participants, with an autonomy and deliberation characteristic of co-evolutionary narrative.

The framework is structured so as to respect autonomous humanoid intentionality, as opposed to the simulated intentionality of conventional digital games. While narrative reasoning in human-centred interactivity focuses exclusively on human judgments, co-evolutionary narrative allows for deliberated action by humanoids. This involves providing these characters with a number of capacities beyond their rudimentary pre-scripted behaviour: First, the ability to sense the behaviour of participants; second, the facility to represent this behaviour symbolically; and third, the capacity to deliberate on their own behaviour and respond intelligibly.
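To make these three capacities concrete, here is a minimal sketch of them as a sense-represent-deliberate loop. Every identifier (`sense`, `symbolise`, `deliberate`, the fact tuples) is invented for illustration and is not the project's code:

```python
# Illustrative sketch only: one humanoid's sense -> represent -> deliberate
# cycle. All names and data structures here are hypothetical.

def sense(tracker):
    """Capacity 1: read raw participant positions from the vision tracker."""
    return tracker.positions()                  # e.g. {"p1": (x, y), ...}

def symbolise(positions, humanoid_pos, near=1.5):
    """Capacity 2: represent raw coordinates as symbolic facts."""
    hx, hy = humanoid_pos
    return {("near", pid) for pid, (x, y) in positions.items()
            if abs(x - hx) + abs(y - hy) < near}

def deliberate(facts):
    """Capacity 3: deliberate on the facts and respond intelligibly."""
    if any(fact[0] == "near" for fact in facts):
        return "block"                          # confront nearby participants
    return "patrol"                             # otherwise keep watch
```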

The humanoids define their autonomy experimentally through their ability to deliberate within a performance context inspired by the experimental film and television work of Samuel Beckett. In this "Beckettian" performance, individual and group autonomy is determined by physical interactive behaviour, whereby characters define themselves and each other through their reciprocal actions in space. The reciprocal exchange of behaviour in this context is sufficiently elastic to allow for the expression of creative autonomy. As Scenario focuses on human and humanoid clustering, its experiments examine the relation between groups of humanoids and groups of humans.

Beckett's research is extended because it provides an aesthetic definition of group autonomy as other-intentional, that is, predicated on shared actions between groups of human participants. In Beckett's Quad, for example, characters mutually define each other by means of their respective territorial manoeuvres as they move backwards and forwards across the boundaries of a quadrant. Quad is drawn on as a way of aesthetically conceptualising the relationship between spatialisation and group consciousness. For example, in one scene, participants are confronted with several humanoids, who cluster themselves into groups in order to block the ability of the human participants to effectively negotiate their way through the space. The better the humanoids can work as a group, the more effective is their blocking activity.

This type of interaction generates a cascading series of gestural and clustering behaviours, testing and evaluating the network of meaningful decisions made by humanoids and human participants as they attempt to make sense of each other's behaviour.

The digital world of Scenario has a number of technical features:

AI System

The AI system is based on a variant of a symbolic logic planner drawn from the cognitive robotics language Golog developed at the University of Toronto, capable of dealing with sensors and external actions. Animations that can be performed by a humanoid character are considered actions that need to be modelled and controlled (e.g., walking to a location, pushing a character, etc.). Each action is modelled in terms of the conditions under which it can be performed (e.g., you can push a character if you are located next to it) and how it affects the environment when the action is performed. Using this modelling, the AI system plans or coordinates the actions (i.e., animations) of the humanoid characters by reasoning about the most appropriate course of action.
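As a rough analogy only: Golog is a logic-programming language built on the situation calculus, so the Python sketch below merely mimics the precondition/effect style of modelling the text describes, using the "push a character if you are located next to it" example. Every identifier is invented:

```python
# Hypothetical STRIPS-style sketch of precondition/effect action modelling;
# not the project's actual Golog code.

class Action:
    def __init__(self, name, pre, add, delete):
        self.name, self.pre, self.add, self.delete = name, pre, add, delete

    def applicable(self, state):
        return self.pre <= state                 # all preconditions hold

    def apply(self, state):
        return (state - self.delete) | self.add  # effects on the environment

# "You can push a character if you are located next to it."
push = Action("push(victim)",
              pre=frozenset({("next_to", "victim")}),
              add=frozenset({("pushed", "victim")}),
              delete=frozenset({("next_to", "victim")}))

walk = Action("walk_to(victim)",
              pre=frozenset(),
              add=frozenset({("next_to", "victim")}),
              delete=frozenset())

def plan(state, goal, actions, depth=5):
    """Naive forward search for an action sequence reaching the goal."""
    if goal <= state:
        return []
    if depth == 0:
        return None
    for a in actions:
        if a.applicable(state):
            rest = plan(a.apply(state), goal, actions, depth - 1)
            if rest is not None:
                return [a.name] + rest
    return None

print(plan(frozenset(), {("pushed", "victim")}, [push, walk]))
# -> ['walk_to(victim)', 'push(victim)']
```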

AI Interface

A networked, multi-threaded interface interacts with an external Artificial Intelligence system that can accept queries of the digital world state (e.g. character positions, events occurring) and can be used to control humanoid characters and trigger events in the humanoid world. Currently, a cognitive robotics language based on Golog is used, but the AI Interface has been developed in a modular fashion that allows for any other language to be plugged in instead.
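A hedged sketch of what such a networked, multi-threaded interface could look like, using plain TCP sockets and newline-delimited JSON; the message format and world-state fields are assumptions, not the project's actual protocol:

```python
import json
import socket
import threading

# Hypothetical world state; in the installation this would be fed by AVIE3.
WORLD = {"characters": {"mother": [1.0, 2.0], "sentinel_1": [4.0, 0.5]},
         "events": []}
LOCK = threading.Lock()                          # WORLD is shared by threads

def handle(conn):
    """Serve one AI client: world-state queries and character commands."""
    with conn:
        for line in conn.makefile():
            msg = json.loads(line)
            with LOCK:
                if msg["op"] == "query":         # e.g. a character's position
                    reply = WORLD["characters"].get(msg["id"])
                elif msg["op"] == "command":     # e.g. trigger a world event
                    WORLD["events"].append(msg["event"])
                    reply = "ok"
                else:
                    reply = "unknown op"
            conn.sendall((json.dumps(reply) + "\n").encode())

def serve(port=9999):
    """Accept AI clients, one thread each, so a slow planner never blocks."""
    srv = socket.create_server(("127.0.0.1", port))
    while True:
        conn, _ = srv.accept()
        threading.Thread(target=handle, args=(conn,), daemon=True).start()
```

The modularity the text mentions follows naturally: any planning language that can emit and consume these messages can be plugged in place of the Golog-based one.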

Real-Time Tracking and Localisation System

The tracking system uses advanced technologies in Computer Vision and Artificial Intelligence to identify and locate persons in space and time. A total of 16 cameras are used to identify individuals as they enter the AVIE3 environment and to maintain their identities as they move around. Spatial coherence is exploited, with overhead cameras providing a clear view but no height information, and oblique cameras providing better location information. A distributed architecture over sophisticated network hardware, along with software techniques, provides real-time results. Between camera updates, prediction techniques are used to maintain accurate person positions at very high update rates. The tracking system is also robust in the presence of multiple people. A real-time voxel reconstruction of every individual within the environment leverages the tracking system to considerably speed up the construction of the 3D model. A head and fingertip recognition and tracking system allows users to interact with the immersive environment through pointing gestures.
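The between-update prediction might, for instance, be a constant-velocity extrapolation; the sketch below assumes that model, since the project's actual predictor is not described:

```python
import numpy as np

# Hypothetical constant-velocity predictor: between camera updates, the last
# confirmed position is extrapolated along the last estimated velocity.

class TrackPredictor:
    def __init__(self, pos, t):
        self.pos = np.asarray(pos, dtype=float)   # last confirmed position
        self.vel = np.zeros_like(self.pos)        # estimated velocity
        self.t = t                                # time of last camera update

    def correct(self, pos, t):
        """Camera update: re-estimate velocity from successive fixes."""
        pos = np.asarray(pos, dtype=float)
        dt = t - self.t
        if dt > 0:
            self.vel = (pos - self.pos) / dt
        self.pos, self.t = pos, t

    def predict(self, t):
        """Between updates: extrapolate the position at render rate."""
        return self.pos + self.vel * (t - self.t)

track = TrackPredictor(pos=[0.0, 0.0], t=0.00)
track.correct(pos=[0.1, 0.0], t=0.10)             # camera frame at 10 Hz
print(track.predict(t=0.15))                      # -> approx. [0.15, 0.0]
```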

Animation Interface

A custom software toolset, iC AI, functions as a virtual laboratory for constructing humanoid characters to be used in narrative scenarios. It enables characters appearing within the AVIE space to exhibit a high level of visual quality, with realistic human-like animations. This includes the ability to instruct characters at a higher programmatic level (walking to a point, looking at objects, turning, employing inverse kinematics) rather than through individual joint commands, and the ability to schedule these behaviours to produce believable, fluid characters.
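In spirit, that higher-level instruction set might look like the sketch below; the command names echo the examples in the text, but the class and its API are invented:

```python
from collections import deque

# Hypothetical high-level animation queue: characters are instructed with
# whole behaviours, not joint-angle commands, and behaviours are scheduled.

class Character:
    def __init__(self, name):
        self.name = name
        self.queue = deque()                     # scheduled behaviours

    def walk_to(self, point):
        self.queue.append(("walk_to", point))    # pathing and gait handled
                                                 # by the animation layer
    def look_at(self, target):
        self.queue.append(("look_at", target))   # head IK, not joint angles

    def update(self):
        """Called once per frame; performs the next scheduled behaviour."""
        if self.queue:
            verb, arg = self.queue.popleft()
            print(f"{self.name}: {verb} {arg}")  # stand-in for the animator

mother = Character("mother")
mother.walk_to((3.0, 1.0))
mother.look_at("sentinel_1")
mother.update()   # mother: walk_to (3.0, 1.0)
mother.update()   # mother: look_at sentinel_1
```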

Mixed Reality System

A custom 3D behaviour toolset, AVIE-MR, allows the creation of 'scenarios' that exhibit a cycle of cause and effect between the real world and the digital world. Its principal feature is that it allows the cognitive robotic language to implement realistic behaviour in the humanoid characters with a minimum of programming effort. This ensures enhanced levels of interactive and immersive experience for human participants.
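One way to picture that cause-and-effect cycle is as a small rule table mapping sensed real-world conditions to digital-world responses; the rules below are illustrative assumptions only, not AVIE-MR's interface:

```python
# Hypothetical cause-and-effect rules in the spirit of AVIE-MR: each rule
# maps a condition on the tracked real world to a digital-world response.

RULES = [
    (lambda w: w["audience_near_exit"], "sentinels_cluster_at_exit"),
    (lambda w: w["audience_dispersed"], "family_splits_to_search"),
]

def step(world_state):
    """One cycle: the real world is sensed, the digital world reacts."""
    return [action for cond, action in RULES if cond(world_state)]

print(step({"audience_near_exit": True, "audience_dispersed": False}))
# -> ['sentinels_cluster_at_exit']
```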

ARC Investigators: Dennis Del Favero, Jeffrey Shaw, Steve Benford, Johannes Goebel
Programmers: Ardrian Hardjono, Jared Berghold, Som Guan, Alex Kupstov, Piyush Bedi, Rob Lawther
Project Funding: ARC DP0556659
2011-2015

  • Jeffrey Shaw & Hu Jieming Twofold Exhibition, Chronus Art Centre, Shanghai, 2014/15
  • Child, Nation & World Cinema Symposium, UNSW, Sydney, 2014
  • ISEA13, UNSW, Sydney, 2013
  • Sydney Film Festival, Sydney, 2011
  • 15th Biennial Film & History Conference, UNSW, Sydney, 2010

Books

Ed Scheer. (2011). Scenario - The Atmosphere Engine. UNSW Press and ZKM: Sydney and Karlsruhe.

Journal articles

Neil C.M. Brown, Timothy Barker and Dennis Del Favero. (2011). "Performing Digital Aesthetics: The Framework for a Theory of the Formation of Interactive Narratives", Leonardo 44(3) (forthcoming; accepted July 2010).

A. Sridhar and A. Sowmya. (2011). "Distributed, Multi-Sensor Tracking of Multiple Participants within Immersive Environments using a 2-Cohort Camera Setup", Machine Vision and Applications (forthcoming; accepted August 2010).

A. Sridhar and A. Sowmya. (2008). "Multiple Camera, Multiple Person Tracking with Pointing Gesture Recognition in Immersive Environments", Lecture Notes in Computer Science 5358(I), G. Bebis et al. (Eds.), Berlin: Springer Verlag: 508-519.

Conference papers

Dennis Del Favero and Timothy Barker. (2010). "Scenario: Co-Evolution, Shared Autonomy and Mixed Reality", Proceedings of the IEEE International Symposium on Mixed and Augmented Reality (ISMAR 2010), Seoul, 13-16 October.

Timothy Barker. (2010). "Interactive Aesthetics: iCinema, Interactive Narratives and Immersive Environments", 15th Biennial Conference of The Film and History Association of Australia and New Zealand, Sydney, 30 November - 3 December.

Maurice Pagnucco. (2010). "What is Artificial Intelligence in Scenario?", 15th Biennial Conference of The Film and History Association of Australia and New Zealand, Sydney, 30 November - 3 December.

Laura Aymerich. (2010). "Respuesta emocional de los participantes en un juego interactivo en un ambiente virtual" [Emotional response of participants in an interactive game in a virtual environment], II Congreso Internacional AE-IC Málaga 2010 Comunicación y desarrollo en la era digital (II International Congress, AE-IC), Malaga, Spain, 3-5 February.

A. Sridhar, A. Sowmya and P. Compton. (2010). "On-line, Incremental Learning for Real-Time Vision Based Movement Recognition", IEEE 9th International Conference on Machine Learning and Applications (ICMLA 2010), Washington, DC, USA, 12-14 December.

A. Sridhar and A. Sowmya. (2009). "SparseSPOT: Using A Priori 3-D Tracking for Real-Time Multi-Person Voxel Reconstruction", Proceedings of the 16th ACM Symposium on Virtual Reality Software and Technology (VRST 2009), S. N. Spencer (Ed.), Kyoto, Japan, November, 135-138.

Laura Aymerich. (2009). "Identification with an Animated Object and its Relationship to Emotions in a Virtual Environment", Entertainment = Emotion Conference, Benasque, Spain, 15-21 November.

Volker Kuchelmeister, Dennis Del Favero, Ardrian Hardjono, Jeffrey Shaw and Matthew McGinity. (2009). "Immersive Mixed Media Augmented Reality Applications and Technology", Advances in Multimedia Information Processing, 10th Pacific Rim Conference on Multimedia, Paisarn Muneesawang, Feng Wu, Itsuo Kumazawa, Athikom Roesaburt, Mark Liao, Xiaoou Tang (Eds.), Bangkok, Thailand, 15-18 December, 112-118.

Volker Kuchelmeister. (2009). "Universal Capture through Stereographic Multi-Perspective Recording and Scene Reconstruction", Advances in Multimedia Information Processing, 10th Pacific Rim Conference on Multimedia, Paisarn Muneesawang, Feng Wu, Itsuo Kumazawa, Athikom Roesaburt, Mark Liao, Xiaoou Tang (Eds.), Bangkok, Thailand, 15-18 December, 974-981.

Tim Ströder and Maurice Pagnucco. (2009). "Realising Deterministic Behavior from Multiple Non-Deterministic Behaviors", Proceedings of the Twenty-First International Joint Conference on Artificial Intelligence (IJCAI'09), Pasadena, USA, 11-17 July, 936-941.

Dennis Del Favero, Neil Brown, Jeffrey Shaw and Peter Weibel. (2007). "Experimental Aesthetics and Interactive Narrative", ACUADS Conference Report, Sydney, University of New South Wales.

Matthew McGinity, Jeffrey Shaw, Dennis Del Favero and Volker Kuchelmeister. (2007). "AVIE: A Versatile Multi-User Stereo 360-Degree Interactive VR Theatre", The 34th International Conference on Computer Graphics and Interactive Techniques, San Diego, 5-9 August.

Director: Dennis Del Favero

Writer: Stephen Sewell

Artificial Intelligence System: Maurice Pagnucco, Timothy Cerexhe

Real-Time Computer Vision System and Interpretation System: Anuraag Sridhar, Arcot Sowmya, Paul Compton

Composer: Kate Moore

Designer: Karla Urizar

Lead Technical Architect: Ardrian Hardjono

Software Engineers: Jared Berghold, Som Guan, Alex Kupstov, Piyush Bedi, Rob Lawther

Hardware Integration Engineer: Robin Chow

Pianist: Saskia Lankhoorn (playing Zomer)

Sound Engineer: Marc Chee

Animation Modeller: Alison Bond

Post-Doctoral Fellow: Tim Barker

Motion Capture Actors: Dianne Reid, Rebekkah Connors, Alethia Sewell, Stephanie Hutchison

Body Models: Corrie Morton, Sylvia Lam, Jennifer Munroe, Taylor Callaghan, Zachary Collie

Motion Capture: MoCap Studios, Deakin University

Voice-over Actors: Noel Hodda, Steve Bisley, Heather Mitchell, Katrina Foster, Marcella and Justine Kerrigan, Bonni Sven, Chaquira Cussack

Production Managers: Sue Midgley, Joann Bowers

Australian Research Council Queen Elizabeth II Fellow: Dennis Del Favero

Australian Research Council Discovery Project Investigators: Dennis Del Favero, Jeffrey Shaw, Steve Benford, Johannes Goebel

Scenario is an experimental study for a research project supported under the Australian Research Council's Fellowship and Discovery Projects funding schemes.