Jan Fischer's Homepage - Research
  
  
This page summarizes my main research topics. For more information about individual publications, follow the links or visit the Publications page.

 Stylized Augmented Reality

This work aims at finding alternative ways of presenting augmented environments to the user. Instead of the visual style found in conventional augmented reality images, the entire scene is presented in an artistic or illustrative style. This way, an interesting and entertaining alternative view of the augmented environment is generated. Moreover, the visual realism of real and virtual objects appears equalized. In typical augmented reality images, the visual realism of virtual objects differs strongly from the appearance of the camera image. By applying the same type of non-photorealistic stylization to both the camera image and the virtual objects, they appear more similar, and ideally become indistinguishable.
The main challenge for the stylized representation of AR scenes is to find rendering algorithms that can process both 3D models and a 2D video stream in real-time and fully automatically. In the original stylized augmented reality system, a combination of CPU image processing and OpenGL rendering was used to create a cartoon-like representation. Later, a method for the pointillistic representation of AR images was described. A more advanced cartoon-like postprocessing filter was one of the first such filters to run in real-time entirely on the GPU. (See also the associated TR.) In a psychophysical experiment, it was shown that real and virtual objects are less distinguishable when using a cartoon-like representation in AR. Later, the principle of stylized augmented reality was also realized using an illustration-like style. Cartoon-like, pointillistic, and illustration-like visual styles for real-time augmented reality were demonstrated in The Augmented Painting, an interactive installation in the Emerging Technologies program at SIGGRAPH 2006 in Boston.
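The published cartoon-like filters run as GPU shaders with more elaborate smoothing and edge detection; as a rough illustration of the general idea (color quantization plus dark outlines at strong luminance edges), a minimal CPU sketch in NumPy might look like this. Function name and parameters are illustrative, not taken from the actual system.

```python
import numpy as np

def cartoon_stylize(image, levels=6, edge_thresh=0.4):
    """Toy cartoon filter: quantize colors, darken strong luminance edges.

    `image` is an HxWx3 float array in [0, 1]. The real filter runs
    entirely on the GPU; this sketch only illustrates the principle.
    """
    # Reduce the color palette to a few discrete levels per channel.
    quantized = np.floor(image * levels) / levels

    # Approximate luminance gradients with simple finite differences.
    lum = image.mean(axis=2)
    gx = np.abs(np.diff(lum, axis=1, prepend=lum[:, :1]))
    gy = np.abs(np.diff(lum, axis=0, prepend=lum[:1, :]))
    edges = (gx + gy) > edge_thresh

    # Draw black outlines where the gradient magnitude is high.
    quantized[edges] = 0.0
    return quantized
```

Applying the same filter to the camera image and to the rendered virtual objects is what makes the two appear similarly stylized.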

Recently, the principle of Stylized Augmented Reality was applied to tangible user interaction. In this new system, only image areas containing objects relevant for the application are selectively stylized. This way, an unmodified visual feedback is guaranteed for the user's hand and arm, as well as the real environment seen in the background. (This system was also presented as a Sketch at SIGGRAPH 2007.) In 2008, I co-authored a survey article on Stylized Depiction in Mixed Reality. The article was published in the International Journal of Virtual Reality.

 Illustrative Visualization

Illustrative visualization techniques make it possible to create more easily understandable graphical representations of complex scientific and medical datasets. To this end, visual styles inspired by traditional technical or scientific illustrations are used. Illustrative visualization methods rely on abstraction; this means that unimportant image content is separated from important structures. This research aims to develop methods for rendering complex three-dimensional datasets in real-time using illustrative techniques. A particular focus is on the display of hidden inner structures, and on the utilization of modern GPU techniques to make fast and high-quality rendering possible.
An illustrative method for the visualization of hidden inner structures in polygonal datasets was presented at IEEE Visualization 2005. Recently, these principles were extended to address a more complex application case. This led to the development of a system for the hybrid illustrative display of functional and anatomical MRI scans. As one particular aspect of this system, a number of application-specific user interaction tools were also investigated. In order to display hidden structures in polygonal models, a method for the extraction of multiple layers of the geometry is necessary. In this context, an algorithm for the real-time voxelization of polygonal models was developed. This system also provides semi-automatic shader code generation for rendering voxelized models.
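The published voxelization algorithm runs on the GPU; purely to illustrate what voxelizing a triangle mesh produces, here is a naive CPU sketch that marks every voxel touched by surface points sampled from each triangle. All names and the sampling strategy are assumptions for this sketch, not the actual method.

```python
import numpy as np

def voxelize_triangles(triangles, grid_size, samples=64):
    """Toy voxelization: mark voxels hit by sampled surface points.

    `triangles` is an (N, 3, 3) array of vertex positions inside the unit
    cube. The real algorithm uses GPU rasterization; dense point sampling
    here only illustrates the boolean-grid output format.
    """
    grid = np.zeros((grid_size,) * 3, dtype=bool)
    # Uniform barycentric sampling of each triangle's surface.
    u = np.random.default_rng(0).random((samples, 2))
    fold = u.sum(axis=1) > 1
    u[fold] = 1 - u[fold]                     # fold samples into the triangle
    w = np.column_stack([1 - u.sum(axis=1), u])  # (samples, 3) weights
    for tri in triangles:
        points = w @ tri                      # (samples, 3) surface positions
        idx = np.clip((points * grid_size).astype(int), 0, grid_size - 1)
        grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return grid
```

A grid like this can then be sampled per fragment to render individual geometry layers.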

 Improved Compositing for Augmented Reality

One long-standing problem of augmented reality image generation is the fact that the visual realism of virtual objects differs strongly from the appearance of the camera image. Therefore, graphical objects in an AR scene stand out from their surroundings. This work aims to find better ways of integrating virtual objects into camera images. Specifically, it focuses on aspects of camera image appearance that had previously received little attention.
A system which incorporates simulated camera image imperfections was presented as a short paper at IEEE and ACM ISMAR 2006. This work includes methods for simulating camera image noise and motion blur in real-time. (See also the associated TR.) Moreover, a simple approach to antialiasing at the border between real and virtual image regions was described. Recently, a more advanced method for real-virtual antialiasing was developed.
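The ISMAR 2006 system implements these camera imperfections in real-time shaders; as a simplified CPU sketch of the two effects (a box-filter motion blur followed by additive Gaussian sensor noise, both assumptions of this sketch rather than the paper's exact models):

```python
import numpy as np

def simulate_camera_artifacts(image, noise_sigma=0.02, blur_taps=5, rng=None):
    """Toy camera imperfections applied to rendered virtual content.

    `image` is an HxWx3 float array in [0, 1]. Horizontal motion blur is
    approximated by averaging shifted copies; sensor noise by additive
    Gaussian noise. The real system runs these effects on the GPU.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    # Horizontal motion blur: average `blur_taps` shifted copies.
    blurred = np.mean(
        [np.roll(image, shift, axis=1) for shift in range(blur_taps)], axis=0)
    # Additive Gaussian noise approximating camera sensor noise.
    noisy = blurred + rng.normal(0.0, noise_sigma, size=image.shape)
    return np.clip(noisy, 0.0, 1.0)
```

Degrading the virtual content this way makes it blend better with the genuinely noisy, blurred camera image.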

 Tangible User Interaction

Tangible user interfaces can liberate the user from having to use conventional computer peripherals - such as mouse, keyboard, and monitor - for interaction. In a tangible interface, real-world objects ("props") are used for interaction with a computer system. Tangible interfaces are often combined with augmented reality displays in order to overlay computer-generated information over the real environment. This research aims at investigating application-specific tangible user interfaces, as well as specialized rendering methods for tangible augmented reality.
I designed a system for untethered interaction in the ARGUS medical augmented reality framework. This system supports freely placeable menu items and the definition of points as well as free-formed shapes in 3D. (A German patent was granted for this user interaction system.) Based on this user interaction technique, a method for semi-automatic volume classification in AR was developed. More recently, the principle of Stylized Augmented Reality was applied to tangible user interaction, as described above: only image areas containing objects relevant for the application are selectively stylized, leaving the user's hand and arm, as well as the real environment in the background, unmodified. (This system was also presented as a Sketch at SIGGRAPH 2007.)

 Occlusion Handling in Augmented Reality

Occlusion handling is a long-standing problem in augmented reality. In monoscopic augmented reality, the depths and geometry of real objects in the environment are unknown. Virtual objects are opaquely rendered over the camera image, always occluding the real scene. This is often an incorrect representation of the actual spatial relationships in the combined real-virtual environment. This research tries to address occlusion handling in monoscopic augmented reality using software- and hardware-based approaches.
In my Master's thesis, I developed an algorithm for handling dynamic occlusion in front of known backgrounds. This approach combines a vision-based method for improving geometric registration with adaptive image comparison against a pre-defined background model. In the context of my medical augmented reality research (see below), I implemented occlusion handling using an existing anatomical model. Here, the main challenge was to find an application-specific mesh simplification for the anatomical model in order to make real-time processing possible. In 2007, a collaboration with Benjamin Huhle (Universität Tübingen) led to the development of the first system which uses time-of-flight range data for occlusion handling in AR.
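Once per-pixel depth estimates for the real scene are available, whether from a background model, an anatomical model, or a time-of-flight camera, the compositing step itself reduces to a per-pixel depth comparison. A minimal sketch of that final step (function name and array layout are assumptions of this sketch):

```python
import numpy as np

def composite_with_occlusion(camera, virtual, virtual_depth, real_depth):
    """Per-pixel occlusion handling: the nearer surface wins.

    `camera` and `virtual` are HxWx3 float images; the depth maps are HxW,
    with np.inf where the virtual layer contains no geometry. Real depth
    may come from a background model or a time-of-flight range sensor.
    """
    virtual_in_front = virtual_depth < real_depth
    return np.where(virtual_in_front[..., None], virtual, camera)
```

The hard part in practice is obtaining an accurate `real_depth`, which is exactly what the methods above address.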

 Medical Augmented Reality

Medical diagnostics and therapy are among the main applications of augmented reality. During my Ph.D. thesis, I designed and realized the medical augmented reality system ARGUS. Unlike many other medical AR systems, ARGUS does not require specialized hardware for tasks like camera tracking and user interaction. Instead, most of the required data are obtained from existing, commercially available, and certified medical equipment.
Basic ARGUS operation uses tracking information from an infrared camera system for camera pose estimation. To this end, a specialized calibration step was developed. Later, tracking accuracy was improved with the help of a hybrid algorithm combining vision-based and infrared tracking. (Also see the associated journal article.) The system was later extended with an application-specific occlusion handling method and a user interaction system. As an application of the basic framework, a method for semi-automatic volume classification in AR was developed.
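Calibrating between tracker and camera coordinate systems typically involves estimating a rigid transform from corresponding point measurements. One standard building block for such calibrations is least-squares rigid registration (the Kabsch algorithm), sketched below; the actual ARGUS calibration procedure may differ, and the function name is illustrative.

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (Kabsch): dst ≈ R @ src + t.

    `src` and `dst` are (N, 3) arrays of corresponding points, e.g. the
    same marker positions measured in two coordinate systems. Returns a
    proper rotation matrix R (det = +1) and a translation vector t.
    """
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)      # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # Correct an improper rotation (reflection) if the SVD produced one.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    return R, t
```

With such a transform in hand, poses reported by the infrared tracking system can be mapped into the camera's coordinate frame for pose estimation.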