(2017/2018) This is a work-in-progress (November 2017, May 2018) project presenting some features of the interactive virtual reality (VR) application we are developing at the VRxAR Labs research group at Linnaeus University. The prototype is influenced by our Open Data Exploration in Virtual Reality research project. In collaboration with the Department of Languages and the Department of Computer Science at Linnaeus University, we display Twitter data collected within the scope of the Nordic Tweet Stream initiative (NTS; a dynamic corpus of geo-location-tagged tweets originating in the Nordic countries) in an immersive VR environment. The VR application allows exploration of the data based on both location and time. From a linguistic point of view, this is an interesting way to investigate potential language differences across location and time in a more immersive scenario. On top of that, such scenarios allow VRxAR Labs to explore approaches to interface and interaction design for VR. The presented work is a case study using data from Twitter; the VR tool itself is data-agnostic and able to visualise, besides tweets, data from different contexts.
Additionally, we implemented a non-immersive (interactive) information visualization and connected it to the immersive VR application in order to explore the NTS data collaboratively: one user in VR, and one user outside VR (October 2018). We call this approach hybrid collaborative immersive analytics, and we look forward to continuing with further studies and research on this in the future.
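As a minimal sketch of the location- and time-based exploration described above (the record fields and sample values are assumptions for illustration, not the actual NTS schema), the filtering might look like:

```javascript
// Hypothetical tweet records (fields are illustrative, not the NTS schema).
const tweets = [
  { text: 'hej!', country: 'SE', timestamp: Date.parse('2017-06-01T10:00:00Z') },
  { text: 'hei!', country: 'FI', timestamp: Date.parse('2017-06-02T12:00:00Z') },
  { text: 'hej!', country: 'DK', timestamp: Date.parse('2017-07-01T09:00:00Z') },
];

// Filter by origin country and/or a time window; any criterion may be omitted.
function filterTweets(data, { country, from, to }) {
  return data.filter(t =>
    (!country || t.country === country) &&
    (!from || t.timestamp >= from) &&
    (!to || t.timestamp <= to));
}

// e.g. all tweets from June 2017, regardless of country
const june = filterTweets(tweets, {
  from: Date.parse('2017-06-01T00:00:00Z'),
  to: Date.parse('2017-06-30T23:59:59Z'),
});
console.log(june.length); // → 2
```

The same predicate-based filtering works for any data source mapped to such a record shape, which is what keeps the tool data-agnostic.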
With Dr. Aris Alissandrakis [VRxAR Labs research group],
Prof. Mikko Laitinen, Prof. Jukka Tyrkkö, Dr. Magnus Levin [Department of Languages], and
Dr. Jonas Lundberg [Department of Computer Science]
software demonstration (ICAME 39), ODXVR x NTS poster (3rd Big Data Conference at Linnaeus University, Dec 1, 2017)
(2018) This is a work-in-progress demo video presenting a user interface design prototype within the context of the PEAR (Augmented Reality for Public Engagement, described below) framework we are developing at the VRxAR Labs research group at Linnaeus University. The interface enables users to browse and select data within an Augmented Reality (AR) environment using a virtual cube object that can be interacted with through 3D gestural input. Touching and rotating the cube horizontally and vertically allows the user to explore the data along two conceptual dimensions, while other gestures trigger an action contextually related to the data displayed on the front face of the cube. The PEAR system is data-agnostic and able to use data from different sources and contexts. The prototype is implemented using Unity3D. The ManoMotion SDK is used to enable 3D gestural input on an ordinary off-the-shelf smartphone. The server and database responsible for handling the data displayed in the prototype (cube interface) application are based on Node.js and MongoDB.
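The two-dimensional browsing idea can be sketched as follows (class and data names are hypothetical; the actual prototype is implemented in Unity3D, and one 90° cube turn is assumed to correspond to one step):

```javascript
// Minimal sketch: cube rotations index into a 2D data grid.
// Horizontal turns step through one conceptual dimension (columns),
// vertical turns through the other (rows); the front face shows one item.
class CubeBrowser {
  constructor(grid) {
    this.grid = grid; // 2D array: grid[row][col] holds a data item
    this.row = 0;     // vertical dimension index
    this.col = 0;     // horizontal dimension index
  }
  rotateHorizontal(steps) { // wraps around, like turning a cube
    const cols = this.grid[this.row].length;
    this.col = ((this.col + steps) % cols + cols) % cols;
  }
  rotateVertical(steps) {
    const rows = this.grid.length;
    this.row = ((this.row + steps) % rows + rows) % rows;
  }
  frontFace() { // the item currently shown on the cube's front face
    return this.grid[this.row][this.col];
  }
}

const browser = new CubeBrowser([
  ['2016-A', '2016-B'],
  ['2017-A', '2017-B'],
]);
browser.rotateHorizontal(1);
browser.rotateVertical(1);
console.log(browser.frontFace()); // → '2017-B'
```

Because the grid is just data handed to the browser, any source the Node.js/MongoDB backend serves can be explored the same way.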
(2017/2018) Based on the experiences from the master degree project (described below), we continued our investigations into open data exploration in virtual reality. We developed an application that can handle different kinds of (open) data, as long as the data is mapped to a specific data model we defined. In the video example (embedded below), data about the 2016 US presidential election is displayed. The VR application was developed using Unity3D. A companion application, developed using openFrameworks, presents an overview of the data displayed in the VR environment. Both applications are connected using the OSC (Open Sound Control) protocol. The companion application shows the position and field of view of the VR user, updated live. A user outside of the VR environment can highlight nodes, which are then also highlighted in the VR environment, facilitating communication between the users. The video example (embedded below) illustrates room-scale VR using the HTC Vive. Two additional input prototypes have been implemented: Oculus Rift using a gamepad, and Oculus Rift with vision-based motion controls using the Leap Motion. A Node.js server and a MongoDB database provide data and visual structures to both the Unity3D and openFrameworks applications.
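To give a flavour of the OSC link between the two applications, here is a sketch of encoding an OSC message per the OSC 1.0 format (the address and arguments are illustrative, not the exact ones our applications exchange): an OSC message is a 4-byte-padded, null-terminated address string, a type tag string beginning with `,`, and big-endian 32-bit arguments.

```javascript
// Pad a string OSC-style: null-terminated, total length a multiple of 4.
function padString(s) {
  const len = Math.ceil((s.length + 1) / 4) * 4; // at least one NUL byte
  const buf = Buffer.alloc(len);                 // zero-filled
  buf.write(s, 'ascii');
  return buf;
}

// Build an OSC message whose arguments are all float32.
function oscMessage(address, floats) {
  const typeTags = ',' + 'f'.repeat(floats.length);
  const args = Buffer.alloc(4 * floats.length);
  floats.forEach((f, i) => args.writeFloatBE(f, 4 * i)); // big-endian floats
  return Buffer.concat([padString(address), padString(typeTags), args]);
}

// e.g. a live head-position update sent to the companion application
// ('/vr/head' is a hypothetical address)
const msg = oscMessage('/vr/head', [1.5, 0.0, -2.25]);
console.log(msg.length); // 12 (address) + 8 (tags) + 12 (args) = 32 bytes
```

In practice such packets are sent over UDP; the companion application decodes them to redraw the VR user's position and field of view.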
With Dr. Aris Alissandrakis [VRxAR Labs research group]
Journal paper submitted
Change your Perspective: Exploration of a 3D Network created with Open Data in an Immersive Virtual Reality Environment using a Head-mounted Display and Vision-based Motion Controls
(2015) Based on the experiences from the early efforts (described below), I moved on to conduct my master degree project (30 ECTS). My thesis investigates how to naturally interact with and explore information based on open data within an immersive virtual reality environment, using a head-mounted display and vision-based motion controls. For this purpose, I implemented an immersive VR application that visualizes information as a network of European capital cities and offers interaction through gesture input. The application focuses primarily on the exploration of the generated network and the consumption of the displayed information. A user interaction study with eleven participants investigated their acceptance of the developed prototype, estimated their workload, and examined their explorative behaviour; an additional dialogue with five experts, in the form of explorative discussions, provided further feedback on the prototype’s design and concept. The results indicate the participants’ enthusiasm and excitement towards the novelty and intuitiveness of exploring information in a less traditional way, while the applied interface and interaction design challenged them in a positive manner. The design and concept were also accepted by the experts, who valued the idea and implementation. They provided constructive feedback on the visualization of the information and encouraged us to be even bolder in making use of the available 3D environment. Finally, the thesis discusses these findings and proposes recommendations for future work.
(2016) The PEAR framework, short for "Augmented Reality for Public Engagement", aims to allow the public to participate in discussions, or be informed about issues regarding particular areas. People can go physically to a particular location, and use their mobile devices to see on-site an Augmented Reality (AR) visualization. This AR visualization can represent results of an online voting process or data collected by various sensors, and is updated live as more people participate or new data is collected. Within this project, we intend to explore the use of AR and novel interaction techniques for public engagement.
PEAR is motivated by the objective of providing people with information and debate updates at the specific site to which the issue is related. People could also obtain such information online, but at both a geographical and a psychological distance. We hope that providing people with access to "live" information in-situ will encourage participation and engagement, and also allow them to reflect from a closer, more involved perspective.
A pilot study ran over the summer of 2016 (May - September) as PEAR 4 VXO, in collaboration with Växjö kommun, related to the ongoing public debate regarding the future development of the Ringsberg/Kristineberg area in Växjö. The public was invited to vote on this issue by using specific hashtags on Twitter (e.g. including #parkRK in a tweet to @vaxjokommun). People could also visit the physical location, use their mobile devices to scan an information poster placed there, download our app, and view (in Augmented Reality) a visualization representing how many votes were cast for each option. This visualization was live and updated as long as people kept voting.
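The tallying step behind the live visualization can be sketched roughly as follows (only #parkRK appears in the description above; #buildRK and the sample tweets are made up for illustration):

```javascript
// Hypothetical option hashtags; #parkRK is from the pilot study,
// #buildRK is an invented second option for this sketch.
const OPTIONS = ['#parkrk', '#buildrk'];

// Count, per option hashtag, how many tweets mention it (case-insensitive).
function tallyVotes(tweets) {
  const counts = Object.fromEntries(OPTIONS.map(tag => [tag, 0]));
  for (const tweet of tweets) {
    const text = tweet.toLowerCase();
    for (const tag of OPTIONS) {
      if (text.includes(tag)) counts[tag] += 1;
    }
  }
  return counts; // drives the live AR visualization of cast votes
}

const counts = tallyVotes([
  '@vaxjokommun I want more green space! #parkRK',
  '@vaxjokommun #buildRK new housing please',
  'Definitely #parkRK @vaxjokommun',
]);
console.log(counts); // → { '#parkrk': 2, '#buildrk': 1 }
```

Re-running the tally as new tweets arrive is what keeps the AR view updated for as long as people keep voting.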
(2014) Virtual reality (VR) has come a long way and is finally here to stay. A lot of research has been conducted in the area of virtual environments and immersive interactions. However, applying both outside the context of games is not a trivial task and needs further exploration, especially with regard to modern, fast-evolving Internet phenomena such as open data or social networks. With the Oculus Rift DK2, a head-mounted display (HMD), and the Leap Motion, a vision-based motion controller that recognizes hand gestures, technologies with the potential to create immersive VR applications are available to a broad audience at comparatively low cost.
As part of the master-level courses “Advanced Topics in Media Technology” (7.5 ECTS) and "Adaptive and Semantic Web" (7.5 ECTS) at Linnaeus University, I conducted fundamental background research and explored interface design approaches for visualizing content related to social networks within an immersive VR environment. After investigating the current state of the art concerning virtual reality and vision-based motion controls, I conducted a literature survey to gather design approaches and interaction guidelines for the creation of an immersive VR application. Deriving both a conceptual and a technical design, I implemented a prototype using the Oculus Rift DK2, the Leap Motion controller, and Unity. With the developed prototype up and running, I conducted a user interaction study to gain real-life experience and further insights. The study focused on the participants’ perception of the presented content within VR and on the vision-based motion controls used to interact with the prototype.
With Dr. Aris Alissandrakis
(2013/2014) The application of Digital Storytelling in learning contexts is a relatively new field of research within Technology-Enhanced Learning. While mobile Digital Storytelling, and thus the usage of mobile multi-touch devices, offers a great single-user experience, there are limitations concerning actual physical collaboration using these devices, mainly because of the relatively small display size. Larger multi-touch screens, e.g. interactive tabletops, can overcome space limitations and provide opportunities for comfortable co-located collaboration. The combination of mobile digital stories generated by learners with interactive tabletops provides an innovative area for co-located collaboration and co-creation.
Consequently, in my bachelor thesis, titled "Exploring new interaction mechanisms to support information sharing and collaboration using large multi-touch displays", I evaluated the usage of Natural User Interface (NUI) and Tangible User Interface (TUI) design principles in the process of interactive co-located collaboration in technology-enhanced learning activities. For this purpose, I implemented an interactive tabletop application with Microsoft Surface in the context of digital storytelling and a storyboard-like UI design approach.
(2014) CeLeKT's current research within the JUXTALEARN project explores interactive in-situ display applications that facilitate new forms of video-based interactivity around digital displays, supporting curiosity and engagement throughout learner communities. To this end, a public display video application presents contextual information as well as quizzes related to videos. Additionally, the application visually illustrates the results of the answered quizzes. The public displays, and thus the applications, are located in schools, making them easily accessible to pupils and students, who are provided with a mobile application to participate in the quizzes and interact with the system.
As a software developer, I am responsible for implementing the dynamic visualization engine, built with modern web technologies, that presents the quiz results to the public audience.
With Maximilian Müller, Alisa Sotenko, Dr. Aris Alissandrakis, Dr. Nuno Otero and Prof. Marcelo Milrad [CeLeKT research group]
(2012) During a group project in my bachelor studies, we built a robot that maps an unexplored area. The project serves as a prototype for a territory-exploration robot on the one hand, and as an investigation of the possibilities and constraints of the LEGO® Mindstorms® education set on the other.
The final concept was a mapping robot that sends explored data via Bluetooth 2.0 to a tablet computer running a corresponding Android application. The app handles several tasks: storing the received data, visualising the data (including showing the robot's current position in the area), simulating the robot's AI by evaluating the most efficient next steps to explore the area, and providing the stored data to other robot units through a public interface.
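The "most efficient next step" evaluation can be sketched along these lines (the grid layout and the scoring are simplified assumptions for illustration, not the project's actual algorithm): from the robot's cell, prefer a neighbouring cell that is still unexplored over one that is already mapped.

```javascript
// Cell states in the explored map.
const UNEXPLORED = 0, FREE = 1, WALL = 2;

// Pick the best reachable neighbouring cell: unexplored cells score
// higher than already-mapped free cells; walls are never candidates.
function nextStep(map, pos) {
  const moves = [[0, 1], [0, -1], [1, 0], [-1, 0]];
  let best = null;
  for (const [dr, dc] of moves) {
    const r = pos[0] + dr, c = pos[1] + dc;
    if (r < 0 || r >= map.length || c < 0 || c >= map[0].length) continue;
    if (map[r][c] === WALL) continue;
    const score = map[r][c] === UNEXPLORED ? 2 : 1;
    if (!best || score > best.score) best = { cell: [r, c], score };
  }
  return best && best.cell; // null when the robot is boxed in
}

const map = [
  [FREE, FREE, UNEXPLORED],
  [WALL, FREE, UNEXPLORED],
];
console.log(nextStep(map, [0, 1])); // → [ 0, 2 ] (the unexplored cell)
```

Repeating this evaluation after each move, and merging in data received from other robot units, gradually turns the unexplored area into a complete map.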
With Kerstin Günther, Thomas Zwerg, Robert Oehler, Stefan Rulewitz, José Gonzalez and Ludwig Dohrmann