Title: Audio-visual annotation graphs for guiding lens-based scene exploration
Authors: Ahsan, Moonisa
Keywords: Interactive visualization lenses; Annotations; User interfaces; Interactive exploration; Guidance; Guided tour
Issue Date: 2022
Publisher: Elsevier
Project: Advanced Visual and Geometric Computing for 3D Capture, Display, and Fabrication
Journal: Computers & Graphics
Abstract:
We introduce a novel approach for guiding users in the exploration of annotated 2D models using interactive visualization lenses. Information on the interesting areas of the model is encoded in an annotation graph generated at authoring time. Each graph node contains an annotation, in the form of a visual and audio markup of the area of interest, as well as the optimal lens parameters that should be used to explore the annotated area and a scalar representing the annotation's importance. Directed graph edges, instead, represent preferred ordering relations in the presentation of annotations: each node points to the set of nodes that should be seen before its associated annotation is presented. A scalar associated with each edge determines the strength of this constraint. At run-time, users explore the scene with the lens, and the graph is exploited to select the annotations to present at a given time. The selection is based on the current view and lens parameters, the graph content and structure, and the navigation history. The best annotation under the lens is presented by playing the associated audio clip and showing the visual markup in overlay. When the user releases control, requests guidance, opts for automatic touring, or when no available annotations are under the lens, the system guides the user towards the next best annotation using glyphs, and potentially moves the lens towards it if the user remains inactive. This approach supports the seamless blending of an automatic tour of the data with interactive lens-based exploration. The approach is tested and discussed in the context of the exploration of multi-layer relightable models.
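The abstract's central data structure can be illustrated with a minimal sketch. The code below is not the authors' implementation; it assumes an importance-minus-penalty scoring rule and circular lens geometry purely for illustration, and all names (`Annotation`, `AnnotationGraph`, `best_under_lens`) are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Annotation:
    """One graph node: markup label, optimal lens parameters, importance."""
    label: str
    lens_center: tuple   # illustrative optimal lens position (x, y)
    lens_radius: float   # illustrative optimal lens radius
    importance: float    # scalar annotation importance

@dataclass
class AnnotationGraph:
    nodes: dict = field(default_factory=dict)  # node id -> Annotation
    edges: dict = field(default_factory=dict)  # node id -> {prerequisite id: strength}

    def add_node(self, nid, ann):
        self.nodes[nid] = ann
        self.edges.setdefault(nid, {})

    def add_edge(self, nid, prereq_id, strength):
        """Directed edge: nid should be seen after prereq_id, with given strength."""
        self.edges[nid][prereq_id] = strength

    def score(self, nid, seen):
        """Assumed rule: importance penalized by the strength of unmet prerequisites."""
        penalty = sum(s for p, s in self.edges[nid].items() if p not in seen)
        return self.nodes[nid].importance - penalty

    def best_under_lens(self, center, radius, seen):
        """Pick the unseen annotation inside the lens with the highest score."""
        candidates = [
            nid for nid, a in self.nodes.items()
            if nid not in seen
            and (a.lens_center[0] - center[0]) ** 2
              + (a.lens_center[1] - center[1]) ** 2 <= radius ** 2
        ]
        if not candidates:
            return None  # caller would then guide the user towards the next best node
        return max(candidates, key=lambda n: self.score(n, seen))
```

With two nodes A and B under the lens and an edge saying A should follow B, the soft ordering constraint lowers A's score until B has been seen, so B is presented first even if A has higher raw importance.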
Appears in Collections: CRS4 publications