T_Visionarium II

T_Visionarium II is an interactive immersive virtual environment that allows viewers to spatially navigate a televisual database and apply a recombinatory search matrix to create emergent narratives from the database’s network of digital streams. T_Visionarium II premiered at Scientia, University of New South Wales, Sydney, in 2006.

[Image: T_Visionarium II in AVIE]

T_Visionarium II exploits the latest advances in automated video analysis, multimedia search and retrieval, and high-volume video streaming. It is the first application to put into operation iCinema’s Advanced Visualization and Interaction Environment (AVIE), the world’s first stereoscopic panoramic projection system. AVIE is a benchmark immersive spatial data management installation that allows the viewer to navigate within a 360-degree surround and a virtually infinite three-dimensional space of audiovisual information.

[Image: T_Visionarium II simulation]

T_Visionarium II’s database of digital television broadcasts contains over thirty hours of material recorded from fifty-six different programs aired on the five Australian channels. As in T_Visionarium I, a single uninterrupted camera take – referred to here as a ‘shot’ or ‘clip’ – defines the indivisible unit of the database. The recorded material is segmented using a cut detection algorithm that automatically detects camera cuts and fades. The result is a database of exactly twenty-two thousand five hundred and seventy-one shots, with an average length of four and a half seconds per shot.

Following segmentation, manual and automated analyses of the database have been undertaken. The automated analysis applies state-of-the-art image and video feature extraction algorithms to each clip, extracting edge, colour, shape, texture and motion information. In parallel, each shot is manually tagged according to a special set of semantic and associative criteria derived from the innate qualities of the video data. These criteria are designed to embody the most useful framework for spontaneously linking the segmented shots into new and unexpected relationships. A shot metric – an algorithm for measuring similarity between shots according to the extracted numerical features and manually applied semantic tags – has been developed with two important properties: speed and parametric dimensional scaling. The first allows almost instantaneous ranking and sorting of the entire database according to similarity with a chosen clip, a necessity for real-time navigation of the database. The second, parametric dimensional scaling, allows the viewer to interactively adjust the importance of the different dimensions when measuring similarity. So rather than the keyword-based recombination used in T_Visionarium I, the method used in T_Visionarium II is associative, allowing the database to be reorganised on affective terms in a cumulative manner.
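
To make the idea of the shot metric concrete, the sketch below shows one way a weighted similarity measure with parametric dimensional scaling could be structured. It is illustrative only: the feature names, the weighting scheme and the tag comparison are assumptions, not the actual T_Visionarium II implementation.

```python
import numpy as np

# Assumed per-clip feature dimensions (edge, colour, shape, texture, motion);
# the real system's feature representation is not specified here.
FEATURE_DIMS = ["edge", "colour", "shape", "texture", "motion"]

def shot_distance(a, b, weights):
    """Weighted distance between two shots.

    a and b are dicts mapping each feature name to a NumPy vector and
    "tags" to a set of semantic tags. weights holds the viewer-adjustable
    importance of each dimension (parametric dimensional scaling).
    """
    d = 0.0
    for dim in FEATURE_DIMS:
        d += weights[dim] * float(np.linalg.norm(a[dim] - b[dim]))
    # Semantic tags compared via Jaccard overlap: shared tags reduce distance.
    union = a["tags"] | b["tags"]
    jaccard = len(a["tags"] & b["tags"]) / len(union) if union else 1.0
    d += weights["tags"] * (1.0 - jaccard)
    return d

def rank_by_similarity(selected, database, weights):
    """Sort the entire database by similarity to the selected shot."""
    return sorted(database, key=lambda shot: shot_distance(selected, shot, weights))
```

In a scheme like this, raising the weight on, say, colour relative to motion re-ranks the whole database around that emphasis, which is what allows the material to be reorganised associatively rather than by keyword.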

[Image: T_Visionarium II simulation]

Within AVIE the viewer is able to see hundreds of the clips distributed in three dimensions over the surface of AVIE’s screen, which is thirty meters long and three and a half meters high. A custom-developed wireless tracking system allows the viewer to select any one of the clips, whereupon the display system automatically clusters around it other video clips that have similar associative features, as defined by the tagging strategy. Clips displayed at an increasing distance from the cluster exhibit fewer and fewer associative similarities to the viewer’s choice. In this way the viewer is free to navigate within the cluster of similarities and assemble a unique sequence of video events that share certain identities, while triggering the rearrangement of that cluster as soon as they move to a different clip. Shifting attention to clips at a greater distance generates completely new recombinations. The result is a dynamic system of narrative modules that are continuously fine-tuned and re-tuned as the viewer navigates the data space. In this process there is a continuous narrative reformulation that is, on the one hand, determined by the ordering of the tagging architecture and, on the other, completely free to reassemble into unexpected emergent sequences according to the individual paths of exploration that viewers undertake.
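
As a rough illustration of this clustering behaviour, the sketch below places the most similar clips closest to the selected clip and pushes less similar ones further out. The ring geometry, spacing values and function names are assumptions made for illustration, not the installation’s actual display code.

```python
import math

def layout_cluster(ranked_clips, n_visible=200, ring_capacity=8, ring_spacing=0.6):
    """Place clips in concentric rings around the selected clip.

    ranked_clips is the database sorted by similarity to the selected
    clip (most similar first), e.g. the output of rank_by_similarity().
    Rank maps to ring index, so clips displayed further from the
    selection share fewer associative features with it.
    """
    placements = []
    for rank, clip in enumerate(ranked_clips[:n_visible]):
        ring = rank // ring_capacity + 1            # 1 = innermost ring
        slot = rank % ring_capacity
        theta = 2 * math.pi * slot / ring_capacity  # position within the ring
        # Offsets (screen units) from the selected clip on the screen surface.
        dx = ring * ring_spacing * math.cos(theta)
        dy = ring * ring_spacing * math.sin(theta)
        placements.append((clip, dx, dy))
    return placements
```

Selecting a different clip would simply re-run the ranking and layout with that clip as the new centre, producing the continuous rearrangement described above.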