Publications UAS Grisons

Overview

Enter a search term or use the advanced search function to filter your search results by author, year of publication, or document type.

 

Publications

  • Staudt, Yves; Keller, Thomas; Rölke, Heiko; Burch, Michael (2022) : Pi - Das Wunder dieser Zahl In: Forster, Michael; Alt, Sharon; Hanselmann, Marcel; Deflorin, Patricia (Hg.): Digitale Transformation an der Fachhochschule Graubünden: Case Studies aus Forschung und Lehre: Chur: FH Graubünden Verlag, S. 111-118. Available online at https://www.fhgr.ch/fh-graubuenden/ueber-die-fh-graubuenden/wofuer-stehen-wir/digitalisierung/digitalisierungswissen-fuer-graubuenden/#c15147, last checked on 20.01.2023

     

    Abstract: The goal of this project was to become familiar with the practical implementation of high performance computing and the visualization of large amounts of data. To this end, the team computed the number Pi to a precision of 62.8 trillion digits. In a further step, we searched this number for patterns, which we then visualized.


  • Bakardzhiev, Hristo; van der Burgt, Marloes; Martins, Edoardo; van den Dool, Bart; Jansen, Chyara; van Scheppingen, David; Wallner, Günter; Burch, Michael (2021) : A Web-Based Eye Tracking Data Visualization Tool In: Del Bimbo, Alberto; Cucchiara, Rita; Sclaroff, Stan; Farinella, Giovanni Maria; Mei, Tao; Bertini, Marco; Escalante, Hugo Jair; Vezzani, Roberto (Hg.): Pattern Recognition: ICPR International Workshops and Challenges: Proceedings. Part III: International Conference on Pattern Recognition (ICPR): Online, 10. - 15. Januar: Cham: Springer (Lecture Notes in Computer Science), S. 405-419

    DOI: https://doi.org/10.1007/978-3-030-68796-0_29 

    Abstract: Visualizing eye tracking data can provide insights in many research fields. However, visualizing such data efficiently and cost-effectively is challenging without well-designed tools. Easily accessible web-based approaches equipped with intuitive and interactive visualizations are a promising solution. Many such tools already exist; however, they mostly use one specific visualization technique. In this paper, we describe a web application which uses a combination of different visualization methods for eye tracking data. The visualization techniques are interactively linked to provide several perspectives on the eye tracking data. We conclude the paper by discussing challenges, limitations, and future work.


  • Burch, Michael; Huang, Weidong; Wakefield, Mathew; Purchase, Helen C.; Weiskopf, Daniel; Hua, Jie (2021): The State of the Art in Empirical User Evaluation of Graph Visualizations. In: IEEE Access 9, S. 4173-4198. Available online at https://doi.org/10.1109/ACCESS.2020.3047616, last checked on 03.09.2021

     

    Abstract: While graph drawing focuses more on the aesthetic representation of node-link diagrams, graph visualization takes into account other visual metaphors, making them useful for graph exploration tasks in information visualization and visual analytics. Although there are aesthetic graph drawing criteria that describe how a graph should be presented to make it faster and more reliably explorable, many controlled and uncontrolled empirical user studies have flourished over the past years. Their goal is to uncover how well the human user performs graph-specific tasks, in many cases compared to previously designed graph visualizations. Because many parameters of a graph dataset, as well as its visual representation, might be varied, and many user studies have been conducted in this space, a state-of-the-art survey is needed to understand evaluation results and findings to inform the future design, research, and application of graph visualizations. In this article, we classify the present literature on the topmost level into graph interpretation, graph memorability, and graph creation, where the users and their tasks are the focus of the evaluation, not the computational aspects. As another outcome of this work, we identify the gaps in this field and sketch ideas for future research directions.


  • Burch, Michael; Wallner, Günter; Broeks, Nick; Piree, Lulof; Boonstra, Nynke (2021) : The Power of Linked Eye Movement Data Visualizations In: Bulling, Andreas; Huckauf, Anke; Gellersen, Hans; Weiskopf, Daniel; Bace, Mihai; Hirzle, Teresa; Alt, Florian; Pfeiffer, Thies; Bednarik, Roman; Krejtz, Krzysztof; Blascheck, Tanja; Burch, Michael; Kiefer, Peter; Dodd, Michael D.; Sharif, Bonita (Hg.): Symposium on Eye Tracking Research and Applications: Full Papers: ETRA 2021: Online, 25. - 27. Mai: New York: Association for Computing Machinery (ACM), S. 3:1-3:11. Available online at https://doi.org/10.1145/3448017.3457377, last checked on 03.09.2021

     

    Abstract: In this paper we showcase several eye movement data visualizations and how they can be interactively linked to design a flexible visualization tool for eye movement data. The aim of this project is to create a user-friendly and easily accessible tool to interpret visual attention patterns and to facilitate data analysis for eye movement data. Hence, to increase accessibility and usability we provide a web-based solution. Users can upload their own eye movement data set and inspect it from several perspectives simultaneously. Insights can be shared and collaboratively discussed with others. The currently available visualization techniques are a 2D density plot, a scanpath representation, a bee swarm, and a scarf plot, all supporting several standard interaction techniques. Moreover, due to the linking feature, users can select data in one visualization, and the same data points will be highlighted in all active visualizations for solving comparison tasks. The tool also provides functions that make it possible to upload both private and public data sets, and can generate URLs to share the data and settings of customized visualizations. A user study showed that the tool is understandable and that providing linked customizable views is beneficial for analyzing eye movement data.


  • Burch, Michael (2021): Session 6: Posters. Chair. 14th International Symposium on Visual Information Communication and Interaction (VINCI). Hasso Plattner Institute. Computer Graphics Systems Group. Online, 7. September, 2021


  • Burch, Michael; Wallner, Günter; van de Wetering, Huub; Rooks, Freek; Morra, Olof (2021): Visual Analysis of Graph Algorithm Dynamics. Short Paper. 14th International Symposium on Visual Information Communication and Interaction (VINCI). Hasso Plattner Institute. Computer Graphics Systems Group. Online, 7. September, 2021


  • Burch, Michael (2021): Session 3: Comics, Narratives, and Evaluation. Chair. 14th International Symposium on Visual Information Communication and Interaction (VINCI). Hasso Plattner Institute. Computer Graphics Systems Group. Online, 6. September, 2021


  • Kumar, Ayush; Goel, Bharat; Rajupet Premkumar, Keshav; Burch, Michael; Müller, Klaus (2021): EyeFIX: An Interactive Visual Analytics Interface for Eye Movement Analysis. Short Paper. 14th International Symposium on Visual Information Communication and Interaction (VINCI). Hasso Plattner Institute. Computer Graphics Systems Group. Online, 6. September, 2021


  • Schmid, Marco; Haymoz, Rahel; Schiller, David; Burch, Michael (2021): Identifying Correlation Patterns in Large Educational Data Sources. Short Paper. 14th International Symposium on Visual Information Communication and Interaction (VINCI). Hasso Plattner Institute. Computer Graphics Systems Group. Online, 7. September, 2021


  • Burch, Michael; Melby, Elisabeth (2020): What more than a hundred project groups reveal about teaching visualization. In: Journal of Visualization 23, S. 895-911. Available online at https://doi.org/10.1007/s12650-020-00659-6, last checked on 07.05.2021

     

    Abstract: The growing number of students can be a challenge for teaching visualization lectures, supervision, evaluation, and grading. Moreover, designing visualization courses that match the different experiences and skills of the students is a major goal in order to find a common solvable task for all of them. In particular, the given task is important to follow a common project goal, to collaborate in small project groups, but also to further experience, learn, or extend programming skills. In this article, we survey our experiences from teaching 116 student project groups of 6 bachelor courses on information visualization with varying topics. Moreover, two teaching strategies were tried: 2 courses were held without lectures and assignments but with weekly scrum sessions (further denoted by TS1), and 4 courses were guided by weekly lectures and assignments (further denoted by TS2). A total of 687 students took part in all of these 6 courses. Managing the ever-growing number of students in computer and data science is a big challenge these days; the students typically apply a design-based active learning scenario while being supported by weekly lectures, assignments, or scrum sessions. As a major outcome, we identified regular supervision, either by lectures and assignments or by regular scrum sessions, as important, because the students were relatively inexperienced bachelor students with a wide range of programming skills but nearly no visualization background. In this article, we explain different subsequent stages to successfully handle the upcoming problems and describe how much supervision was involved in the development of the visualization project. The project task description is given in a way that it has a minimal number of requirements but can be extended in many directions, while most of the decisions, such as programming languages, visualization approaches, or interaction techniques, are up to the students. Finally, we discuss the benefits and drawbacks of both teaching strategies.


  • Burch, Michael; Veneri, Alberto; Sun, Bangjie (2020): Exploring eye movement data with image-based clustering. In: Journal of Visualization 23, S. 677-694. Available online at https://doi.org/10.1007/s12650-020-00656-9, last checked on 07.05.2021

     

    Abstract: In this article, we describe a new feature for exploring eye movement data based on image-based clustering. To reach this goal, visual attention is taken into account to compute a list of thumbnail images from the presented stimulus. These thumbnails carry information about visual scanning strategies, but showing them just in a space-filling and unordered fashion does not support the detection of patterns over space, time, or study participants. In this article, we present an enhancement of the EyeCloud approach that is based on standard word cloud layouts adapted to image thumbnails by exploiting image information to cluster and group the thumbnails that are visually attended. To also indicate the temporal sequence of the thumbnails, we add color-coded links and further visual features to dig deeper in the visual attention data. The usefulness of the technique is illustrated by applying it to eye movement data from a formerly conducted eye tracking experiment investigating route finding tasks in public transport maps. Finally, we discuss limitations and scalability issues of the approach.


  • Burch, Michael; Bennema ten Brinke, Kiet; Castella, Adrien; Karray, Ghassen; Peters, Sebastiaan; Shteriyanov, Vasil; Vlasvinkel, Rinse (2020) : Guiding graph exploration by combining layouts and reorderings In: Nguyen, Quang Vinh; Zhao, Ying; Burch, Michael; Westenberg, Michel (Hg.): The 13th International Symposium on Visual Information Communication and Interaction: Proceedings: VINCI: Eindhoven, 8. - 10. Dezember: New York: Association for Computing Machinery, S. 25:1-25:5. Available online at https://doi.org/10.1145/3430036.3430064, last checked on 07.05.2021

     

    Abstract: Visualizing graphs is a challenging task due to the various properties of the underlying relational data. For small and sparse graphs, node-link diagrams are the perceptually most efficient representation, whereas for dense graphs with attached data, adjacency matrices might be the better choice. Since graphs can contain both properties, being globally sparse and locally dense, a combination of several visualizations is beneficial. In this paper we describe a visually and algorithmically scalable approach to provide views and perspectives about graphs as interactively linked node-link as well as adjacency matrix visualizations. The novelty of the technique is that insights like clusters or anomalies from one or several combined views can be used to influence the layout or reordering of the others. Moreover, the importance of nodes and node groups can be detected, computed, and visualized by taking into account several layout and reordering properties in combination as well as different edge properties for the same set of nodes. We illustrate the usefulness of our tool by applying it to graph datasets like co-authorships, co-citations, and a CPAN distribution.


  • Burch, Michael; Vramulet, Adrian; Thieme, Alex; Vorobiova, Alina; Shehu, Denis; Miulescu, Mara; Farsadyar, Mehrdad; van Krieken, Tar (2020) : VizWick. A multiperspective view of hierarchical data In: Nguyen, Quang Vinh; Zhao, Ying; Burch, Michael; Westenberg, Michel (Hg.): The 13th International Symposium on Visual Information Communication and Interaction: Proceedings: VINCI: Eindhoven, 8. - 10. Dezember: New York: Association for Computing Machinery, S. 23:1-23:5. Available online at https://doi.org/10.1145/3430036.3430044, last checked on 07.05.2021

     

    Abstract: In this paper we present a web-based interactive tool for visualizing hierarchical data. Our main purpose is to facilitate the visualization of datasets. We achieve this by offering VizWick in a browser environment, with no requirement of additional software. We provide the option to view the same dataset from multiple coordinated perspectives, thus providing the possibility to gain more analytical insight than if the dataset was visualized in a single view. We focus on several hierarchy visualization techniques which can be either in 2D, 3D, or a virtual reality environment. The choice of programming language is JavaScript, with the aid of PixiJS and Three.js libraries. We demonstrate the usefulness of our tool by applying it to the NCBI taxonomy, a hierarchically structured dataset which contains over 300,000 elements.


  • Burch, Michael; Kuipers, Tos; Qian, Chen; Zhou, Fangqin (2020) : Comparing dimensionality reductions for eye movement data In: Nguyen, Quang Vinh; Zhao, Ying; Burch, Michael; Westenberg, Michel (Hg.): The 13th International Symposium on Visual Information Communication and Interaction: Proceedings: VINCI: Eindhoven, 8. - 10. Dezember: New York: Association for Computing Machinery, S. 18:1-18:5. Available online at https://doi.org/10.1145/3430036.3430049, last checked on 07.05.2021

     

    Abstract: Eye movement data is high-dimensional, and therefore hard to visualize. In this paper we focus on a dataset of scanpaths: Eye movements performed by subjects and tracked during a task which is based on path-finding. We describe comparisons of different approaches of dimensionality reduction applied to eye movement data, including t-distributed stochastic neighbor embedding (t-SNE), uniform manifold approximation and projection (UMAP), principal component analysis (PCA), and metric multidimensional scaling (MDS). We describe a tool created to analyze and compare these different methods, and perform a case study in which we explore an eye movement dataset.


  • Burch, Michael; van Lith, John; van de Waterlaat, Nick; van Winden, Jurrien (2020) : Voronoier. From images to Voronoi diagrams In: Nguyen, Quang Vinh; Zhao, Ying; Burch, Michael; Westenberg, Michel (Hg.): The 13th International Symposium on Visual Information Communication and Interaction: Proceedings: VINCI: Eindhoven, 8. - 10. Dezember: New York: Association for Computing Machinery, S. 16:1-16:9. Available online at https://doi.org/10.1145/3430036.3430043, last checked on 07.05.2021

     

    Abstract: We describe an interactive application for transforming an image into a Voronoi diagram. We combine a variety of methods for generating a spatial point cloud from an input image. In addition, several methods for pruning the spatial point cloud are introduced. These pruning methods can significantly reduce the computation time needed for the transformation. A Voronoi diagram can be constructed from the pruned spatial point cloud using either a naive approach or a Delaunay triangulation. Moreover, an order-k Voronoi diagram can be constructed using this naive approach. We introduce many configuration parameters and we integrate interactivity in the Voronoi diagram by giving users the ability to manually add and remove centroids. To make the application accessible to everyone, we provide a web-based solution by using the Vue.js framework based on the JavaScript programming language. This solution supports the transformation from an image to a Voronoi diagram in a browser, and hence, has the advantage of not being restricted to a certain kind of environment. We illustrate the usefulness of our application by applying it to several images.


  • Burch, Michael; Saeed, Abdullah; Vorobiova, Alina; Zahedani, Armin Memar (2020) : eDBLP. Visualizing scientific publications In: Nguyen, Quang Vinh; Zhao, Ying; Burch, Michael; Westenberg, Michel (Hg.): The 13th International Symposium on Visual Information Communication and Interaction: Proceedings: VINCI: Eindhoven, 8. - 10. Dezember: New York: Association for Computing Machinery, S. 9:1-9:8. Available online at https://doi.org/10.1145/3430036.3430052, last checked on 07.05.2021

     

    Abstract: In this paper we describe an approach for visualizing the textual information archived in the DBLP and the static and dynamic relations contained in it. These relations exist between authors and co-authors, between keywords, but also between authors and keywords. Visually representing them provides a way to quickly get an overview of emerging or disappearing topics as well as researchers and researcher groups. To reach our goal we apply node-link diagrams, word clouds, heatmaps, and area plots to the preprocessed and transformed DBLP data. All visualizations are equipped with interaction techniques and are built by using the functionality of the Bokeh library in Python, which enables users to run eDBLP in a web browser and to explore the dataset in an interactive and intuitive way. Finally, we discuss limitations and scalability issues of our approach.


  • Burch, Michael; van de Wetering, Huub; Klaassen, Nico (2020) : Multiple linked perspectives on hierarchical data In: Nguyen, Quang Vinh; Zhao, Ying; Burch, Michael; Westenberg, Michel (Hg.): The 13th International Symposium on Visual Information Communication and Interaction: Proceedings: VINCI: Eindhoven, 8. - 10. Dezember: New York: Association for Computing Machinery, S. 3:1-3:8. Available online at https://doi.org/10.1145/3430036.3430037, last checked on 07.05.2021

     

    Abstract: This paper describes an interactive web-based tool for visualizing hierarchical data including the recently developed concept of space-reclaiming icicle plots and several more traditional hierarchy visualizations. The tool provides ways to upload, share, explore, and compare hierarchical data using a multitude of different linked hierarchy visualizations. The current version supports up to 8 hierarchy visualizations, with the space-reclaiming icicle plots among them. Several of the visualizations can be shown in linked views while typical hierarchy parameters and visual variables can be changed on user demand. The interactive tool makes use of OpenGL and Angular, an industry standard JavaScript platform, and runs in a web browser. We illustrate the usefulness of the visualization tool by applying it to the NCBI taxonomy that consists of more than 300,000 hierarchically organized species while filtering for the tetrapoda subhierarchy. Finally, we explain implementation details and discuss limitations and scalability issues of the linked visualization techniques.


  • Burch, Michael; Kurzhals, Kuno (2020) : Visual Analysis of Eye Movements During Game Play In: Bulling, Andreas; Huckauf, Anke; Jain, Eakta; Radach, Ralph; Weiskopf, Daniel (Hg.): Symposium on Eye Tracking Research and Applications: Short Papers: ETRA 2020: Online, 2. - 5. Juni: New York: Association for Computing Machinery (ACM), S. 59:1-59:5. Available online at https://doi.org/10.1145/3379156.3391839, last checked on 07.05.2021

     

    Abstract: Eye movements indicate visual attention and strategies during game play, regardless of whether in board, sports, or computer games. Additional factors such as individual vs. group play and active playing vs. observing game play further differentiate application scenarios for eye movement analysis. Visual analysis has proven to be an effective means to investigate and interpret such highly dynamic spatio-temporal data. In this paper, we contribute a classification strategy for different scenarios for the visual analysis of gaze data during game play. Based on an initial sample of related work, we derive multiple aspects comprising data sources, game mode, player number, player state, analysis mode, and analysis goal. We apply this classification strategy to describe typical analysis scenarios and research questions as they can be found in related work. We further discuss open challenges and research directions for new application scenarios of eye movements in game play.


  • Burch, Michael; Timmermans, Neil (2020) : Sankeye. A Visualization Technique for AOI Transitions In: Bulling, Andreas; Huckauf, Anke; Jain, Eakta; Radach, Ralph; Weiskopf, Daniel (Hg.): Symposium on Eye Tracking Research and Applications: Short Papers: ETRA 2020: Online, 2. - 5. Juni: New York: Association for Computing Machinery (ACM), S. 48:1-48:5. Available online at https://doi.org/10.1145/3379156.3391833, last checked on 07.05.2021

     

    Abstract: Visually exploring AOI transitions aggregated from a group of eye tracked people is a challenging task. Many visualizations typically produce visual clutter or aggregate the temporal or visit order information in the data, hiding the visual task solution strategies from the observer. In this paper we introduce the Sankeye technique that is based on the visual metaphor of Sankey diagrams applied to eye movement data, hence the name Sankeye. The technique encodes the frequencies of AOI transitions into differently thick rivers and subrivers. The distributions of the AOI transitions are visually represented by splitting and merging subrivers in a left-to-right reading direction. The technique allows users to interactively adapt the number of predefined AOIs as well as the transition frequency threshold, with the goal of deriving patterns and insights from eye movement data.


  • Burch, Michael (2020) : Teaching Eye Tracking Visual Analytics in Computer and Data Science Bachelor Courses In: Bulling, Andreas; Huckauf, Anke; Jain, Eakta; Radach, Ralph; Weiskopf, Daniel (Hg.): Symposium on Eye Tracking Research and Applications: Full Papers: ETRA 2020: Online, 2. - 5. Juni: New York: Association for Computing Machinery (ACM), S. 17:1-17:9. Available online at https://doi.org/10.1145/3379155.3391331, last checked on 03.09.2021

     

    Abstract: Making students aware of eye tracking technologies can greatly benefit the entire application field, since they may form the next generation of eye tracking researchers. On the one hand, students learn the usefulness and benefits of this technique for different scientific purposes such as user evaluation to find design flaws or visual attention strategies, gaze-assisted interaction to enhance and augment traditional interaction techniques, or as a means to improve virtual reality experiences. On the other hand, the large amount of recorded data poses a challenge for data analytics: finding rules, patterns, but also anomalies in the data, finally leading to insights and knowledge to understand or predict eye movement patterns, which can have synergy effects for both disciplines - eye tracking and visual analytics. In this paper we describe the challenges of teaching eye tracking combined with visual analytics in a computer and data science bachelor course with 42 students in an active learning scenario following four teaching stages. Some of the student project results are shown to demonstrate learning outcomes with respect to eye tracking data analysis and visual analytics techniques.


  • Burch, Michael (2020) : The Importance of Requirements Engineering for Teaching Large Visualization Courses: Fourth International Workshop on Learning from Other Disciplines for Requirements Engineering: Proceedings: D4RE: Zürich, 31. August. IEEE Computer Society: Los Alamitos, Washington, Tokyo: Conference Publishing Services, S. 6-10. Available online at https://doi.org/10.1109/D4RE51199.2020.00007, last checked on 07.05.2021

     

    Abstract: Teaching visualization courses typically requires some kind of small-scale software project in which the students collaboratively create a software product with the goal of practically applying visualization concepts. This software focuses on providing an interactive solution to a given dataset scenario, supporting the detection of insights based on users' tasks at hand. However, while creating such a tool, several software development stages have to be taken into account, with requirements engineering as one of the major repeating stages. In this paper we describe three different visualization project-based teaching strategies TS_H, TS_M, and TS_L, depending on the degree of freedom for the student groups. Moreover, they differ in how requirements engineering is an inherent ingredient in designing, implementing, testing, deploying, evaluating, maintaining, and evolving a certain software product. We gathered experience with more than 1,000 students in a total of 9 courses running over 8 weeks each, within approximately 2 years. All three teaching strategies have their own benefits and drawbacks; however, requirements engineering is considered more or less important, mostly depending on the degree of freedom the student project groups were given during the courses. As a major outcome we consider the courses successful, not only because we learned the importance of requirements engineering for teaching project-based visualization courses, but also because many students passed the courses, the student evaluation was quite positive, and the written reports showed that the students had an immense learning effect.


  • Burch, Michael; Staudt, Yves; Frommer, Sina; Uttenweiler, Janis; Grupp, Peter; Hähnle, Steffen; Scheytt, Josia; Kloos, Uwe (2020) : PasVis. Enhancing public transport maps with interactive passenger data visualizations In: Nguyen, Quang Vinh; Zhao, Ying; Burch, Michael; Westenberg, Michel (Hg.): The 13th International Symposium on Visual Information Communication and Interaction: Proceedings: VINCI: Eindhoven, 8. - 10. Dezember: New York: Association for Computing Machinery, S. 13:1-13:8. Available online at https://doi.org/10.1145/3430036.3430061, last checked on 07.05.2021

     

    Abstract: Public transport maps are typically designed in a way to support route finding tasks for passengers while they also provide an overview about stations, metro lines, and city-specific attractions. Most of those maps are designed as a static representation, maybe placed in a metro station or printed in a travel guide. In this paper we describe a dynamic, interactive public transport map visualization enhanced by additional views for the dynamic passenger data on different levels of temporal granularity. Moreover, we also allow extra statistical information in form of density plots, calendar-based visualizations, and line graphs. All this information is linked to the contextual metro map to give a viewer insights into the relations between time points and typical routes taken by the passengers. We illustrate the usefulness of our interactive visualization by applying it to the railway system of Hamburg in Germany while also taking into account the extra passenger data. As another indication for the usefulness of the interactively enhanced metro maps we conducted a user experiment with 20 participants.


  • Burch, Michael (2020): Graph-Related Properties for Comparing Dynamic Call Graphs. In: Journal of Computer Languages. Available online at https://doi.org/10.1016/j.cola.2020.100967, last checked on 03.09.2021

     

    Abstract: Software systems produce long sequences of call graphs, in particular, if the graphs are generated during runtime and not revision by revision. Visualizing, analyzing, and interacting with such long dynamic graphs with respect to different properties is a challenging task. In this article we describe an interactive visualization technique for dynamic call graphs that supports the observation of the data in vertex, edge, and time dimensions based on properties related to the graph topology, inherent vertex hierarchy, involved links, and graph-theoretic problems. Moreover, we provide a time-aligned view on several dynamic graphs with the goal to compare them visually. We also provide standard node-link diagrams for individual graphs or aggregated dynamic graph subsequences as a details-on-demand technique and for supporting graph comparisons on different temporal granularities. We illustrate the usefulness of the dynamic graph visualization by applying it to the call relations at runtime of the open source software project JHotDraw. We evaluated the interactive visualization by reflecting on the static and dynamic patterns we could identify in the dataset by changing the graph properties under exploration. Moreover, we conducted a controlled user study with 20 participants investigating three typical tasks like finding graph sequences, identifying a complete graph, and exploring the reason for a change in a shortest path algorithm. Finally, we discuss scalabilities and limitations of our approach.


  • Burch, Michael; Wallner, Günter; Arends, Sven T. T.; Beri, Puneet (2020) : Procedural City Modeling for AR Applications: Information Visualisation: AI & Analytics, Biomedical Visualization, Builtviz, and Geometric Modelling & Imaging: Proceedings: 24th International Conference on Information Visualisation (IV): Melbourne, 7. - 11. September: Piscataway, NJ: Institute of Electrical and Electronic Engineers (IEEE), S. 581-586. Available online at https://doi.org/10.1109/IV51561.2020.00098, last checked on 03.09.2021

     

    Abstract: In this paper we present a procedural city modeling approach which combines real-world street data and the multi-nuclei theory to generate believable cities. Our approach performs a partitioning of a city into urban zones based on Perlin noise and different parameters that are adjustable by the user. Our method is efficient enough to be used for augmented reality applications to be run on devices with limited processing capabilities such as smartphones. The approach can be used to build applications for professional urban planners and entertainment purposes. We illustrate the usefulness of our approach by applying it to data of three cities. Moreover, we provide implementation details and discuss challenges and limitations of the technique in the light of several application scenarios.


  • Burch, Michael; Wallner, Günter; Lazar Angelescu, Sergiu; Lakatos, Peter (2020) : Visual Analysis of FIFA World Cup Data: Information Visualisation: AI & Analytics, Biomedical Visualization, Builtviz, and Geometric Modelling & Imaging: Proceedings: 24th International Conference on Information Visualisation (IV): Melbourne, 7. - 11. September: Piscataway, NJ: Institute of Electrical and Electronic Engineers (IEEE), S. 114-119. Available online at https://doi.org/10.1109/IV51561.2020.00028, last checked on 09.09.2021

     

    Abstract: Soccer is one of the most popular sports in the world, played by thousands of professionals and amateurs every week. Consequently, it is no surprise that it generates an enormous amount of data. In today's data-driven world it is essential to find an optimal, self-explanatory way to present the data so that visual patterns relating to the underlying data patterns can be derived. In this paper, we describe an interactive visualization for analyzing soccer data and identifying patterns, correlations, and insights. We illustrate the usefulness of our approach, especially targeted towards non-visualization experts, by applying it to World Cup data and by discussing potential use cases.


  • Hu, Ya Ting; Burch, Michael; van de Wetering, Huub (2020) : Visualizing dynamic graphs with heat triangles In: Nguyen, Quang Vinh; Zhao, Ying; Burch, Michael; Westenberg, Michel (Hg.): The 13th International Symposium on Visual Information Communication and Interaction: Proceedings: VINCI: Eindhoven, 8. - 10. Dezember: New York: Association for Computing Machinery, S. 7:1-7:8. Available online at https://doi.org/10.1145/3430036.3430053, last checked on 07.05.2021

     

    Abstract: In this paper an overview-based interactive visualization for temporally long dynamic graph sequences is described. To reach this goal, each graph can be mapped to a certain value based on a given property. Among others, a property can be number of vertices, number of edges, average degree, density, number of self-loops, degree (maximum and total), or edge weight (minimum, maximum, and total). To achieve an overview over time, an aggregation strategy based on either the mean, minimum, or maximum of two values is applied. This temporal value aggregation generates a triangular shape with an overview of the entire graph sequence as the peak. The color coding can be adjusted, forming visual patterns that can be rapidly explored for certain data features over time, supporting comparison tasks between the properties. The usefulness of the approach is illustrated by means of applying it to dynamic graphs generated from US domestic flight data.


  • Kurzhals, Kuno; Burch, Michael; Weiskopf, Daniel (2020): What We See and What We Get from Visualization. Eye Tracking Beyond Gaze Distributions and Scanpaths. Position Paper. Available online at https://arxiv.org/abs/2009.14515, last checked on 07.05.2021

     

    Abstract: Technical progress in hardware and software enables us to record gaze data in everyday situations and over long time spans. Among a multitude of research opportunities, this technology enables visualization researchers to catch a glimpse behind performance measures and into the perceptual and cognitive processes of people using visualization techniques. The majority of eye tracking studies performed for visualization research is limited to the analysis of gaze distributions and aggregated statistics, thus only covering a small portion of insights that can be derived from gaze data. We argue that incorporating theories and methodology from psychology and cognitive science will benefit the design and evaluation of eye tracking experiments for visualization. This position paper outlines our experiences with eye tracking in visualization and states the benefits that an interdisciplinary research field on visualization psychology might bring for better understanding how people interpret visualizations.


  • van de Wetering, Huub; Klaassen, Nico; Burch, Michael (2020) : Space-Reclaiming Icicle Plots In: Beck, Fabian; Seo, Jinwook; Wang, Chaoli (Hg.): IEEE Pacific Visualization Symposium: Proceedings: PacificVis: Online, 3. - 5. Juni: Piscataway, NJ: Institute of Electrical and Electronic Engineers (IEEE), S. 121-130. Available online at https://doi.org/10.1109/PacificVis48177.2020.4908, last checked on 03.09.2021

     

    Abstract: This paper describes the space-reclaiming icicle plots, hierarchy visualizations based on the visual metaphor of icicles. As a novelty, our approach tries to reclaim empty space in all hierarchy levels. This reclaiming results in an improved visibility of the hierarchy elements especially those in deeper levels. We implemented an algorithm that is capable of producing more space-reclaiming icicle plot variants. Several visual parameters can be tweaked to change the visual appearance and readability of the plots: among others, a space-reclaiming parameter, an empty space shrinking parameter, and a gap size. To illustrate the usefulness of the novel visualization technique we applied it, among others, to an NCBI taxonomy dataset consisting of more than 300,000 elements and with maximum depth 42. Moreover, we explore the parameter and design space by applying several values for the visual parameters. We also conducted a controlled user study with 17 participants and received qualitative feedback from 112 students from a visualization course.


  • Vidyapu, Sandeep; Saradhi Vedula, Vijaya; Burch, Michael; Bhattacharya, Samit (2020) : Attention-based Cross-Modal Unification of Visualized Text and Image Features. Understanding the influence of interface and user idiosyncrasies on unification for free-viewing In: Bulling, Andreas; Huckauf, Anke; Jain, Eakta; Radach, Ralph; Weiskopf, Daniel (Hg.): Symposium on Eye Tracking Research and Applications: Adjunct Proceedings: ETRA 2020: Online, 2. - 5. Juni: New York: Association for Computing Machinery (ACM), S. 29:1-29:9. Available online at https://doi.org/10.1145/3379157.3391303, last checked on 03.09.2021

     

    Abstract: The attentional analysis on graphical user interfaces (GUIs) is shifting from Areas-of-Interest (AOIs) to Data-of-Interest (DOI). However, the heterogeneous data modalities on GUIs hinder DOI-based analyses. To overcome this limitation, we present a Canonical Correlation Analysis (CCA) based approach to unify the heterogeneous modalities (text and images) concerning user attention. In particular, the influence of interface and user idiosyncrasies in establishing the cross-modal correlation is studied. The performance of the proposed approach is analyzed for free-viewing eye-tracking experiments conducted on bi-modal webpages. The results reveal: (i) Cross-modal text and image visual features are correlated when the interface idiosyncrasies, alone or along with user idiosyncrasies, are constrained. (ii) The font families of text are comparable to color histogram visual features of images in drawing the users' attention. (iii) Text and image visual features can delineate each other's attention. Our approach finds applications in user-oriented webpage rendering and computational attention modeling.


  • Bruder, Valentin; Ben Lahmar, Houssem; Hlawatsch, Marcel; Frey, Steffen; Burch, Michael; Weiskopf, Daniel; Herschel, Melanie; Ertl, Thomas (2019): Volume-based large dynamic graph analysis supported by evolution provenance. In: Multimedia Tools and Applications 78, S. 32939-32965. Available online at https://doi.org/10.1007/s11042-019-07878-6, last checked on 09.09.2021

     

    Abstract: We present an approach for the visualization and interactive analysis of dynamic graphs that contain a large number of time steps. A specific focus is put on the support of analyzing temporal aspects in the data. Central to our approach is a static, volumetric representation of the dynamic graph based on the concept of space-time cubes that we create by stacking the adjacency matrices of all time steps. The use of GPU-accelerated volume rendering techniques allows us to render this representation interactively. We identified four classes of analytics methods as being important for the analysis of large and complex graph data, which we discuss in detail: data views, aggregation and filtering, comparison, and evolution provenance. Implementations of the respective methods are presented in an integrated application, enabling interactive exploration and analysis of large graphs. We demonstrate the applicability, usefulness, and scalability of our approach by presenting two examples for analyzing dynamic graphs. Furthermore, we let visualization experts evaluate our analytics approach.


  • Burch, Michael; Melby, Elisabeth (2019) : Teaching and Evaluating Collaborative Group Work in Large Visualization Courses In: Wang, Changbo; Burch, Michael; Krone, Michael (Hg.): The 12th International Symposium on Visual Information Communication and Interaction: Proceedings: VINCI: Shanghai, 20. - 22. September: New York: Association for Computing Machinery (ACM), S. 17:1-17:8. Available online at https://doi.org/10.1145/3356422.3356447, last checked on 09.09.2021

     

    Abstract: The growing number of students can be a challenge for teaching visualization lectures, supervision, evaluation, and grading. Moreover, designing a visualization course that matches the different experiences and skills of the students is one major goal in order to find a common solvable task for all of the students. However, the given task is important to follow a common goal, to collaborate in small project groups, but also to further experience, learn, or extend programming skills. In this paper we describe an approach to manage a large number of 272 students in a design-based active learning course, who were relatively inexperienced first-year bachelor students with a wide range of programming skills. We explain different subsequent stages to successfully handle the upcoming problems and describe how many supervisors are involved in the development of the project, and to which extent. The project task description is given in a way that it has a minimal number of requirements but can be extended in many directions, while most of the decisions, such as programming languages, visualization approaches, or interaction techniques, are up to the students. Finally, we discuss the benefits and drawbacks of our teaching strategy.


  • Burch, Michael; Veneri, Alberto; Sun, Bangjie (2019) : EyeClouds. A Visualization and Analysis Tool for Exploring Eye Movement Data In: Wang, Changbo; Burch, Michael; Krone, Michael (Hg.): The 12th International Symposium on Visual Information Communication and Interaction: Proceedings: VINCI: Shanghai, 20. - 22. September: New York: Association for Computing Machinery (ACM), S. 8:1-8:8. Available online at https://doi.org/10.1145/3356422.3356423, last checked on 09.09.2021

     

    Abstract: In this paper, we discuss and evaluate the advantages and disadvantages of several techniques to visualize and analyze eye movement data tracked and recorded from public transport map viewers in a formerly conducted eye tracking experiment. Such techniques include heat maps and gaze stripes. To overcome the disadvantages and improve the effectiveness of those techniques, we present a viable solution that makes use of existing techniques such as heat maps and gaze stripes, as well as attention clouds, which are inspired by the general concept of word clouds. We also develop a web application with interactive attention clouds, named the EyeCloud, to put theory into practice. The main objective of this paper is to help public transport map designers and producers gain feedback and insights on how the current design of the map can be further improved by leveraging the visualization tool. In addition, this visualization tool, the EyeCloud, can be easily extended to many other purposes with various types of data. It could possibly be applied in the entertainment industry, for instance, to track the attention of film audiences in order to improve advertisements.


  • Burch, Michael; Aerts, Willem; Bon, Daan; MacCarren, Sean; Rothuizen, Laurent; Smet, Olivier; Wöltgens, Daan (2019) : Combining Interactive Hierarchy Visualizations in a Web-based Application In: Kerren, Andreas; Hurter, Christophe; Braz, Jose (Hg.): 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications: Proceedings. Volume 3: VISIGRAPP 2019 (IVAPP): Prag, 25. - 27. Februar: Setúbal: SciTePress, S. 191-198. Available online at https://doi.org/10.5220/0007307701910198, last checked on 09.09.2021

     

    Abstract: In this paper we describe a web-based tool combining several hierarchy visualization techniques. Those run in a browser and support the communication of hierarchy data that is omnipresent in many application fields like biology, software engineering, sports, or in algorithmic approaches like hierarchical clustering. To this end we provide node-link diagrams, Pythagoras trees, circular, as well as 3D treemaps also called 3D step-trees to give several visual perspectives on the same data and to improve data exploration tasks. The visualizations are interactive and linked, while the tool is available online, making it easily accessible for people all around the world without installing extra software or relying on additional libraries and frameworks. Hierarchy datasets can be uploaded to a server and shared with others. The visualizations were primarily implemented using JavaScript, and more specifically, rendered using the D3.js library. We illustrate the usefulness of the interactive visualization by applying them to the NCBI taxonomy and the Influenza dataset.


  • Burch, Michael; Kumar, Ayush; Timmermans, Neil (2019) : An interactive web-based visual analytics tool for detecting strategic eye movement patterns: Symposium on Eye Tracking Research and Applications: Proceedings: ETRA 2019: Denver, 25. - 28. Juni: New York: Association for Computing Machinery (ACM), S. 93:1-93:5. Available online at https://doi.org/10.1145/3317960.3321615, last checked on 09.09.2021

     

    Abstract: In this paper we describe an interactive and web-based visual analytics tool combining linked visualization techniques and algorithmic approaches for exploring the hierarchical visual scanning behavior of a group of people when solving tasks in a static stimulus. This has the benefit that the recorded eye movement data can be observed in a more structured way with the goal to find patterns in the common scanning behavior of a group of eye tracked people. To reach this goal we first preprocess and aggregate the scanpaths based on formerly defined areas of interest (AOIs) which generates a weighted directed graph. We visually represent the resulting AOI graph as a modified hierarchical graph layout. This can be used to filter and navigate in the eye movement data shown in a separate view overplotted on the stimulus for preserving the mental map and for providing an intuitive view on the semantics of the original stimulus. Several interaction techniques and complementary views with visualizations are implemented. Moreover, due to the web-based nature of the tool, users can upload, share, and explore data with others. To illustrate the usefulness of our concept we apply it to real-world eye movement data from a formerly conducted eye tracking experiment.


  • Burch, Michael (2019) : Interaction graphs. Visual analysis of eye movement data from interactive stimuli: Symposium on Eye Tracking Research and Applications: Proceedings: ETRA 2019: Denver, 25. - 28. Juni: New York: Association for Computing Machinery (ACM), S. 89:1-89:5. Available online at https://doi.org/10.1145/3317960.3321617, last checked on 09.09.2021

     

    Abstract: Eye tracking studies have been conducted to understand the visual attention in different scenarios like, for example, how people read text, which graphical elements in a visualization are frequently attended, how they drive a car, or how they behave during a shopping task. All of these scenarios - either static or dynamic - show a visual stimulus in which the spectators are not able to change the visual content they see. This is different if interaction is allowed like in (graphical) user interfaces (UIs), integrated development environments (IDEs), dynamic web pages (with different user-defined states), or interactive displays in general as in human-computer interaction, which gives a viewer the opportunity to actively change the stimulus content. Typically, for the analysis and visualization of time-varying visual attention paid to a web page, there is a big difference for the analytics and visualization approaches - algorithmically as well as visually - if the presented web page stimulus is static or dynamic, i.e. time-varying, or dynamic in the sense that user interaction is allowed. In this paper we discuss the challenges for visual analysis concepts in order to analyze the recorded data, in particular, with the goal to improve interactive stimuli, i.e., the layout of a web page, but also the interaction concept. We describe a data model which leads to interaction graphs, a possible way to analyze and visualize this kind of eye movement data.


  • Burch, Michael; Kumar, Ayush; Müller, Klaus; Kervezee, Titus; Nuijten, Wouter; Oostenbach, Rens; Peeters, Lucas; Smit, Gijs (2019) : Finding the outliers in scanpath data: Symposium on Eye Tracking Research and Applications: Proceedings: ETRA 2019: Denver, 25. - 28. Juni: New York: Association for Computing Machinery (ACM), S. 83:1-83:5. Available online at https://doi.org/10.1145/3317958.3318225, last checked on 09.09.2021

     

    Abstract: In this paper, we describe the design of an interactive visualization tool for the comparison of eye movement data with a special focus on the outliers. In order to make the tool usable and accessible to anyone with a data science background, we provide a web-based solution by using the Dash library based on the Python programming language and the Python library Plotly. Interactive visualization is very well supported by Dash, which makes the visualization tool easy to use. We support multiple ways of comparing user scanpaths like bounding boxes and Jaccard indices to identify similarities. Moreover, we support matrix reordering to clearly separate the outliers in the scanpaths. We further support the data analyst by complementary views such as gaze plots and visual attention maps.


  • Kumar, Ayush; Timmermans, Neil; Burch, Michael; Müller, Klaus (2019) : Clustered eye movement similarity matrices: Symposium on Eye Tracking Research and Applications: Proceedings: ETRA 2019: Denver, 25. - 28. Juni: New York: Association for Computing Machinery (ACM), S. 82:1-82:9. Available online at https://doi.org/10.1145/3317958.3319811, last checked on 09.09.2021

     

    Abstract: Eye movements recorded for many study participants are difficult to interpret, in particular when the task is to identify similar scanning strategies over space, time, and participants. In this paper we describe an approach in which we first compare scanpaths, not only based on Jaccard (JD) and bounding box (BB) similarities, but also on more complex approaches like longest common subsequence (LCS), Frechet distance (FD), dynamic time warping (DTW), and edit distance (ED). The results of these algorithms generate a weighted comparison matrix while each entry encodes the pairwise participant scanpath comparison strength. To better identify participant groups of similar eye movement behavior we reorder this matrix by hierarchical clustering, optimal-leaf ordering, dimensionality reduction, or a spectral approach. The matrix visualization is linked to the original stimulus overplotted with visual attention maps and gaze plots on which typical interactions like temporal, spatial, or participant-based filtering can be applied.


  • Kumar, Ayush; Burch, Michael; Müller, Klaus (2019) : Visually comparing eye movements over space and time: Symposium on Eye Tracking Research and Applications: Proceedings: ETRA 2019: Denver, 25. - 28. Juni: New York: Association for Computing Machinery (ACM), S. 81:1-81:9. Available online at https://doi.org/10.1145/3317958.3319810, last checked on 09.09.2021

     

    Abstract: Analyzing and visualizing eye movement data can provide useful insights into the connectivities and linkings of points and areas of interest (POIs and AOIs). Those typically time-varying relations can give hints about applied visual scanning strategies by either individual or many eye tracked people. However, the challenging issue with this kind of data is its spatio-temporal nature requiring a good visual encoding in order to first, achieve a scalable overview-based diagram, and second, to derive static or dynamic patterns that might correspond to certain comparable visual scanning strategies. To reliably identify the dynamic strategies we describe a visualization technique that generates a more linear representation of the spatio-temporal scan paths. This is achieved by applying different visual encodings of the spatial dimensions that typically build a limitation for an eye movement data visualization causing visual clutter effects, overdraw, and occlusions while the temporal dimension is depicted as a linear time axis. The presented interactive visualization concept is composed of three linked views depicting spatial, metrics-related, as well as distance-based aspects over time.


  • Kumar, Ayush; Tyagi, Anjul; Burch, Michael; Weiskopf, Daniel; Müller, Klaus (2019) : Task classification model for visual fixation, exploration, and search: Symposium on Eye Tracking Research and Applications: Proceedings: ETRA 2019: Denver, 25. - 28. Juni: New York: Association for Computing Machinery (ACM), S. 65:1-65:4. Available online at https://doi.org/10.1145/3314111.3323073, last checked on 09.09.2021

     

    Abstract: Yarbus' claim to decode the observer's task from eye movements has received mixed reactions. In this paper, we have supported the hypothesis that it is possible to decode the task. We conducted an exploratory analysis on the dataset by projecting features and data points into a scatter plot to visualize the nuance properties for each task. Following this analysis, we eliminated highly correlated features before training an SVM and Ada Boosting classifier to predict the tasks from this filtered eye movements data. We achieve an accuracy of 95.4% on this task classification problem and hence, support the hypothesis that task classification is possible from a user's eye movement data.


  • Lampprecht, Tobias; Salb, David; Mauser, Marek; van de Wetering, Huub; Burch, Michael; Kloos, Uwe (2019) : Visual Analysis of Formula One Races: Information Visualisation: Biomedical Visualization and Geometric Modelling & Imaging: Proceedings: 23rd International Conference on Information Visualisation (IV. Part I): Paris, 2. - 5. Juli: Piscataway, NJ: Institute of Electrical and Electronic Engineers (IEEE), S. 94-99. Available online at https://doi.org/10.1109/IV.2019.00025, last checked on 09.09.2021

     

    Abstract: In this paper we describe an interactive web-based visual analysis tool for Formula One races. It first provides an overview of all races on a yearly basis in a calendar-like representation. From this starting point, races can be selected and visually inspected in detail. We support a dynamic race position diagram as well as a more detailed lap times line plot for comparing the drivers' lap times. Many interaction techniques are supported, such as selection, filtering, highlighting, color coding, or details-on-demand. We illustrate the usefulness of our visualization tool by applying it to a Formula One dataset and describe the different dynamic visual racing patterns for a number of selected races and drivers.

    Export record: Citavi Endnote RIS ISI BibTeX WordXML
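    A minimal sketch of a lap-times line plot as described in the entry above, using a toy pandas data frame in place of a real Formula One timing dataset; column names and values are illustrative.

```python
# Sketch: compare drivers' lap times across a race as a line plot.
import matplotlib.pyplot as plt
import pandas as pd

laps = pd.DataFrame({
    "lap":    [1, 2, 3, 4] * 2,
    "driver": ["HAM"] * 4 + ["VER"] * 4,
    "time_s": [92.1, 90.8, 91.0, 90.5, 91.7, 91.2, 90.9, 90.7],
})

fig, ax = plt.subplots()
for driver, grp in laps.groupby("driver"):
    ax.plot(grp["lap"], grp["time_s"], marker="o", label=driver)
ax.set_xlabel("lap")
ax.set_ylabel("lap time (s)")
ax.legend()
plt.show()
```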

  • Munz, Tanja; Burch, Michael; van Benthem, Toon; Poels, Yoeri; Beck, Fabian; Weiskopf, Daniel (2019) : Overlap-Free Drawing of Generalized Pythagoras Trees for Hierarchy Visualization: International Conference on Visualisation: Proceedings: IEEE Visualization Conference (VIS): Vancouver, 20. - 25. Oktober: Piscataway, NJ: Institute of Electrical and Electronic Engineers (IEEE), S. 251-255. Available online at https://doi.org/10.1109/VISUAL.2019.8933606, last checked on 09.09.2021

     

    Abstract: Generalized Pythagoras trees were developed for visualizing hierarchical data, producing organic, fractal-like representations. However, the drawback of the original layout algorithm is visual overlap of tree branches. To avoid such overlap, we introduce an adapted drawing algorithm using ellipses instead of circles to recursively place tree nodes representing the subhierarchies. Our technique is demonstrated by resolving overlap in diverse real-world and generated datasets, while comparing the results to the original approach.

    Export record: Citavi Endnote RIS ISI BibTeX WordXML

  • Abdelaal, Moataz; Hlawatsch, Marcel; Burch, Michael; Weiskopf, Daniel (2018) : Clustering for stacked edge splatting: Vision, Modeling, and Visualization: Proceedings: VMV 2018: Stuttgart, 10. - 12. Oktober: Goslar: Eurographics Association, S. 127-134. Available online at https://doi.org/10.2312/vmv.20181262, last checked on 09.09.2021

     

    Abstract: We present a time-scalable approach for visualizing dynamic graphs. By adopting bipartite graph layouts known from parallel edge splatting, individual graphs are horizontally stacked by drawing partial edges, leading to stacked edge splatting. This allows us to uncover temporal patterns while achieving time-scalability. To preserve the graph's structural information, we introduce the representative graph, in which edges are aggregated and drawn at full length. The representative graph is then placed on top of the last graph in the (sub)sequence. This allows us to obtain detailed information about the partial edges by tracing them back to the representative graph. We apply sequential temporal clustering to obtain an overview of the different temporal phases of the graph sequence together with the corresponding structure for each phase. We demonstrate the effectiveness of our approach using real-world datasets.

    Export record: Citavi Endnote RIS ISI BibTeX WordXML
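    A minimal sketch of the underlying edge-splatting idea: every edge of a bipartite layout is rasterized into a small density image, and the graphs of a sequence are stacked side by side. The partial-edge drawing and the representative graph of the paper are not reproduced here; the toy graphs are illustrative.

```python
# Sketch: splat bipartite-layout edges into a density image per time step and
# stack the time steps horizontally.
import numpy as np
import matplotlib.pyplot as plt

def splat(edges, n_nodes, height=200, width=60):
    """Accumulate straight edges of one graph time step into a density image."""
    img = np.zeros((height, width))
    ys = np.linspace(0, height - 1, n_nodes)
    for u, v in edges:
        xs = np.arange(width)
        y = np.interp(xs, [0, width - 1], [ys[u], ys[v]])
        img[np.round(y).astype(int), xs] += 1.0
    return img

graph_sequence = [[(0, 3), (1, 2), (2, 2)], [(0, 0), (1, 3), (3, 2)]]
density = np.hstack([splat(g, n_nodes=4) for g in graph_sequence])
plt.imshow(density, cmap="hot", aspect="auto")
plt.show()
```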

  • Bruder, Valentin; Hlawatsch, Marcel; Frey, Steffen; Burch, Michael; Weiskopf, Daniel; Ertl, Thomas (2018) : Volume-Based Large Dynamic Graph Analytics: Information Visualisation: Biomedical Visualization, Visualisation on Built and Rural Environments & Geometric Modelling and Imaging: Proceedings: 22nd International Conference on Information Visualisation (IV): Fisciano, 10. - 13. Juli: Piscataway, NJ: Institute of Electrical and Electronic Engineers (IEEE), S. 210-219. Available online at https://doi.org/10.1109/iV.2018.00045, last checked on 09.09.2021

     

    Abstract: We present an approach for interactively analyzing large dynamic graphs consisting of several thousand time steps with a particular focus on temporal aspects. We employ a static representation of the time-varying graph based on the concept of space-time cubes, i.e., we create a volumetric representation of the graph by stacking the adjacency matrices of each of its time steps. To achieve an efficient analysis of complex data, we discuss three classes of analytics methods of particular importance in this context: data views, aggregation and filtering, and comparison. For these classes, we present a GPU-based implementation of the respective analysis methods that enables the interactive analysis of large graphs. We demonstrate the utility as well as the scalability of our approach by presenting application examples for analyzing different time-varying datasets.

    Export record: Citavi Endnote RIS ISI BibTeX WordXML
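    A minimal sketch of the space-time-cube idea from the entry above: the adjacency matrices of all time steps are stacked into one 3D array, and simple NumPy aggregations along the time axis stand in for the paper's GPU-based data views, aggregation, and filtering.

```python
# Sketch: stack per-time-step adjacency matrices into a volume and aggregate.
import numpy as np

n_nodes, n_steps = 50, 1000
rng = np.random.default_rng(1)
# One random sparse adjacency matrix per time step (toy data only).
volume = (rng.random((n_steps, n_nodes, n_nodes)) < 0.02).astype(np.float32)

mean_over_time = volume.mean(axis=0)          # aggregation: average adjacency
activity_per_step = volume.sum(axis=(1, 2))   # overview: edge count per step
window_union = volume[100:200].max(axis=0)    # data view: union of a time window
print(mean_over_time.shape, activity_per_step.shape, window_union.shape)
```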

  • Burch, Michael (2018): Exploring density regions for analyzing dynamic graph data. In: Journal of Visual Languages & Computing, S. 133-144. Available online at https://doi.org/10.1016/j.jvlc.2017.09.007, last checked on 09.09.2021

     

    Abstract: Static or dynamic graphs are typically visualized by node-link diagrams, adjacency matrices, adjacency lists, or hybrids thereof. In particular, in the case of a changing graph structure, a viewer wishes to be able to visually compare the graphs in a sequence. Doing such a comparison task rapidly and reliably demands visually analyzing the dynamic graph for certain dynamic patterns. In this paper we describe a novel dynamic graph visualization that is based on the concept of smooth density fields generated by first splatting the link information of a given graph in a certain layout or visual metaphor. To further visually enhance the time-varying graph structures, we add user-adaptable isolines to the resulting dynamic graph representation. The computed visual encoding of the dynamic graph is aesthetically appealing due to its smooth curves and can additionally be used for comparisons in a long graph sequence, i.e., from an information visualization perspective it serves as an overview representation that supports starting more detailed analysis processes. To demonstrate the usefulness of the technique, we explore real-world dynamic graph data while taking into account visual parameters like visual metaphors, node-link layouts, smoothing iterations, number of isolines, and different color codings. In this extended work we additionally incorporate matrix and list splatting while also supporting the selection of density regions with overlaid link information. Moreover, from the selected graph the user can automatically apply region comparisons with other graphs based on global and local density properties. Such a feature is particularly useful for finding commonalities and hence serves as a special filtering function.

    Export record: Citavi Endnote RIS ISI BibTeX WordXML
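    A minimal sketch of the density-field idea from the entry above, assuming edges of a node-link layout are splatted into a 2D grid, smoothed with a Gaussian filter, and overlaid with isolines; positions, edge list, and smoothing parameters are illustrative assumptions.

```python
# Sketch: splat node-link edges into a grid, smooth to a density field,
# and overlay isolines.
import numpy as np
import matplotlib.pyplot as plt
from scipy.ndimage import gaussian_filter

size = 256
field = np.zeros((size, size))

def splat_edge(field, p, q, samples=100):
    """Accumulate a straight edge between layout positions p and q."""
    for t in np.linspace(0, 1, samples):
        x, y = (1 - t) * np.asarray(p) + t * np.asarray(q)
        field[int(y), int(x)] += 1.0

edges = [((30, 40), (200, 180)), ((60, 220), (210, 50)), ((30, 40), (210, 50))]
for p, q in edges:
    splat_edge(field, p, q)

density = gaussian_filter(field, sigma=6)                        # smooth density field
plt.imshow(density, cmap="viridis", origin="lower")
plt.contour(density, levels=5, colors="white", linewidths=0.7)   # isolines
plt.show()
```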

  • Burch, Michael; Chuang, Lewis L.; Duchowski, Andrew; Weiskopf, Daniel; Groner, Rudolf (2018): Eye Tracking and Visualization. Introduction to the Special Thematic Issue of the Journal of Eye Movement Research. In: Journal of Eye Movement Research 10. Available online at https://doi.org/10.16910/jemr.10.5.1, last checked on 09.09.2021

     

    Abstract: There is a growing interest in eye tracking technologies applied to support traditional visualization techniques like diagrams, charts, maps, or plots, whether static, animated, or interactive. More complex data analyses are required to derive knowledge and meaning from the data. Eye tracking systems serve that purpose in combination with biological and computer vision, cognition, perception, visualization, human-computer interaction, as well as usability and user experience research. The 10 articles collected in this thematic special issue provide interesting examples of how sophisticated methods of data analysis and representation enable researchers to discover and describe fundamental spatio-temporal regularities in the data. The human visual system, supported by appropriate visualization tools, enables the human operator to solve complex tasks, like understanding and interpreting three-dimensional medical images, controlling air traffic by radar displays, supporting instrument flight tasks, or interacting with virtual realities. The development and application of new visualization techniques is of major importance for future technological progress.

    Export record: Citavi Endnote RIS ISI BibTeX WordXML

  • Burch, Michael (2018) : Which Symbols, Features, and Regions Are Visually Attended in Metro Maps? In: Czarnowski, Ireneusz; Howlett, Robert J.; Jain, Lakhmi C. (Hg.): Intelligent Decision Technologies 2017: Proceedings. Part II: 9th KES International Conference on Intelligent Decision Technologies (KES-IDT 2017): Vilamoura, 21. - 23. Juni 2017: Cham: Springer International Publishing (Smart Innovation, Systems and Technologies), S. 237-246. Available online at https://doi.org/10.1007/978-3-319-59424-8_22, last checked on 09.09.2021

     

    Abstract: We conducted an eye tracking study with 40 participants to understand which visual objects, like metro lines, stations, interchange points, specific symbols, or extra information such as labels or legends, are visually attended during a free examination scenario. In this study we did not ask a specific question, like a route finding task as in a previous eye tracking study, but let the study participants freely inspect a displayed metro map system for 20 s each. We used 24 different metro maps with the same characteristics, but varied between color-coded maps and gray-scale ones. Understanding the visual scanning behavior of people while inspecting metro maps is an important but also challenging task. On the positive side, the analysis of such eye movement data can support a map designer in producing better maps, in particular by finding out which regions are visually attended first or most frequently, for example to guide the viewer. The visually attended regions and objects can be a key aspect in making a metro map easier and faster to comprehend and, finally, useful for travellers in foreign and unknown cities all over the world. The major result from our eye tracking experiment is that the study participants significantly attend to symbols that pop out from the map, like the airport signs or the map legends, which belong to the key features of maps. Moreover, dense regions are more frequently attended than sparse ones. The visual attention maps of colored and gray-scale maps look very similar.

    Export record: Citavi Endnote RIS ISI BibTeX WordXML

  • Burch, Michael (2018) : Visual Analysis of Eye Movement Data with Fixation Distance Plots In: Czarnowski, Ireneusz; Howlett, Robert J.; Jain, Lakhmi C. (Hg.): Intelligent Decision Technologies 2017: Proceedings. Part II: 9th KES International Conference on Intelligent Decision Technologies (KES-IDT 2017): Vilamoura, 21. - 23. Juni 2017: Cham: Springer International Publishing (Smart Innovation, Systems and Technologies), S. 227-236. Available online at https://doi.org/10.1007/978-3-319-59424-8_21, last checked on 09.09.2021

     

    Abstract: Eye tracking has become an increasingly important technology in many fields of research, like marketing, psychology, human-computer interaction, and also visualization. Understanding the eye movements of people while they solve a given task can greatly help to improve a visual stimulus. The challenge with this kind of spatio-temporal data is to provide a useful visualization that gives an overview of the fixations with their durations and sequential order, the saccades with their orientations and lengths, but also the distances of several fixations in space. Traditional visualizations like gaze plots, which show the stimulus in its original form overplotted with the scan paths, typically produce vast amounts of visual clutter and make a visual exploration of the eye movement data a difficult task. In this paper we introduce fixation distance plots, which place the fixation sequences on a horizontal line of color-coded circles of varying thickness while showing additional saccadic information. Moreover, the user can apply distance thresholds that indicate whether fixations are within a certain distance, allowing an impression of the spatial stimulus information. We illustrate the usefulness of the approach by applying it to eye movement data from a formerly conducted eye tracking experiment investigating route finding tasks in public transport maps.

    Export record: Citavi Endnote RIS ISI BibTeX WordXML
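    A minimal sketch of a fixation distance plot as described above: fixations are placed along a horizontal axis in temporal order, circle size encodes duration, colour encodes the distance to the previous fixation, and an assumed distance threshold highlights nearby fixation pairs; the toy data and exact encodings are illustrative.

```python
# Sketch: fixation sequence on a horizontal line with duration- and
# distance-based encodings plus a distance threshold.
import numpy as np
import matplotlib.pyplot as plt

# Toy fixations: x (px), y (px), duration (ms).
fix = np.array([[100, 120, 250], [400, 130, 180], [420, 300, 300], [150, 280, 220]], float)
order = np.arange(len(fix))
dist_prev = np.r_[0, np.linalg.norm(np.diff(fix[:, :2], axis=0), axis=1)]

fig, ax = plt.subplots()
sc = ax.scatter(order, np.zeros_like(order), s=fix[:, 2], c=dist_prev, cmap="plasma")
for i in np.where(dist_prev[1:] < 200)[0]:    # assumed distance threshold: 200 px
    ax.plot([i, i + 1], [0, 0], lw=2, color="gray")
fig.colorbar(sc, label="distance to previous fixation (px)")
ax.set_xlabel("fixation index (time order)")
ax.set_yticks([])
plt.show()
```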

  • Burch, Michael; Kumar, Ayush; Müller, Klaus (2018) : The hierarchical flow of eye movements In: Chuang, Lewis L.; Burch, Michael; Kurzhals, Kuno (Hg.): 3rd Workshop on Eye Tracking and Visualization: Proceedings: ETVIS '18: Warschau, 15. Juni: New York: Association for Computing Machinery (ACM), S. 3:1-3:5. Available online at https://doi.org/10.1145/3205929.3205930, last checked on 09.09.2021

     

    Abstract: Eye movements are composed of spatial and temporal aspects. Moreover, not only the eye movements of one subject are of interest; a data analyst is typically interested in the scanning strategies of a group of people in a condensed form. This data aggregation can provide useful insights into the visual attention over space and time, leading to the detection of possible visual problems or design flaws in the presented stimulus. In this paper we present a way to visually explore the flow of eye movements, i.e., we try to bring a layered hierarchical structure into the spatio-temporal eye movements. To reach this goal, the stimulus is spatially divided into areas of interest (AOIs) and temporally or sequentially aggregated into time periods or subsequences. The weighted AOI transitions are used to model directed graph edges while the AOIs form the graph vertices. The flow of eye movements is naturally obtained by computing hierarchical layers for the AOIs, where the downward edges indicate the hierarchical flow between the AOIs on the corresponding layers.

    Export record: Citavi Endnote RIS ISI BibTeX WordXML
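    A minimal sketch of the AOI transition idea from the entry above, assuming fixation sequences already mapped to AOI labels; transition counts form weighted directed edges, and a simple longest-path layering on an acyclic toy graph stands in for the paper's hierarchical layer computation.

```python
# Sketch: build a weighted AOI transition graph and assign hierarchy layers.
from collections import Counter

import networkx as nx

scanpaths = ["ABCD", "ABD", "ACD", "ABCD"]  # toy AOI-label sequences
transitions = Counter((a, b) for sp in scanpaths for a, b in zip(sp, sp[1:]))

G = nx.DiGraph()
for (a, b), w in transitions.items():
    G.add_edge(a, b, weight=w)

# Longest-path layering; works here because the toy transition graph is acyclic.
layer = {v: 0 for v in G.nodes}
for v in nx.topological_sort(G):
    for _, w in G.out_edges(v):
        layer[w] = max(layer[w], layer[v] + 1)
print(dict(transitions))
print(layer)
```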

  • Burch, Michael (2018) : Identifying similar eye movement patterns with t-SNE: Vision, Modeling, and Visualization: Proceedings: VMV 2018: Stuttgart, 10. - 12. Oktober: Goslar: Eurographics Association, S. 111-118. Available online at https://doi.org/10.2312/vmv.20181260, last checked on 09.09.2021

     

    Abstract: In this paper we describe an approach based on t-distributed stochastic neighbor embedding (t-SNE) for projecting high-dimensional eye movement data to two dimensions. The lower-dimensional data is then represented as scatterplots reflecting the local structure of the high-dimensional eye movement data and hence providing a strategy to identify similar eye movement patterns. The scatterplots can be used as a means to interact with the data and to further annotate and analyze it for additional properties focusing on space, time, or participants. Since t-SNE oftentimes produces groups of data points mapped to and overplotted in small scatterplot regions, we additionally support the modification of data point groups by a force-directed placement, a post-processing step that can be run after the initial t-SNE algorithm has stopped. This spatial modification can be applied to each identified data point group independently, which is difficult to integrate into a standard t-SNE approach. We illustrate the usefulness of our technique by applying it to formerly conducted eye tracking studies investigating the readability of public transport maps and map annotations.

    Export record: Citavi Endnote RIS ISI BibTeX WordXML
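    A minimal sketch of the projection step, using scikit-learn's t-SNE on synthetic per-scanpath feature vectors; the feature set, perplexity, and labels are illustrative assumptions, and the force-directed post-processing of point groups described above is not included.

```python
# Sketch: embed high-dimensional eye movement features into 2D with t-SNE
# and show them as a scatterplot.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

rng = np.random.default_rng(2)
# Toy high-dimensional features (e.g. fixation/saccade statistics per scanpath).
features = rng.normal(size=(120, 16))
labels = rng.integers(0, 3, size=120)  # e.g. map / annotation condition

embedding = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(features)
plt.scatter(embedding[:, 0], embedding[:, 1], c=labels, cmap="tab10", s=15)
plt.show()
```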

  • Burch, Michael (2018) : Visual Notifier. A Timeline-Based Visualization for Notifications from Several Environments In: Klein, Karsten; Li, Yi-Na (Hg.): The 11th International Symposium on Visual Information Communication and Interaction: Proceedings: VINCI: Växjö, 13. - 15. August: New York: Association for Computing Machinery (ACM), S. 114-115. Available online at https://doi.org/10.1145/3231622.3231629, last checked on 09.09.2021

     

    Abstract: In this paper we describe the visual notifier, a timeline-based visualization that provides an overview of incoming notifications while also supporting an easier way to manage such notifications from several environments. Moreover, it is possible to link them by certain subject types, author names or groups, or textual content. Such a visualization is particularly useful if there are too many notifications occurring in a short time period to answer and react to immediately, or to keep track of. The interactive visualization is implemented in Java and is easily extendable by additional functionality and interaction techniques. We illustrate the usefulness by testing it for several email addresses and messages from Facebook, Twitter, Ebay, Linkedin, and Researchgate.

    Export record: Citavi Endnote RIS ISI BibTeX WordXML