Home  /  Applied Sciences  /  Vol. 11, No. 12 (2021)  /  Article

Approximate Image-Space Multidirectional Occlusion Shading Model for Direct Volume Rendering

Yun Jang and Seokyeon Kim    

Abstract

Understanding and perceiving three-dimensional scientific visualizations, such as volume rendering, benefits from the visual cues produced by shading models. Conventional approaches use local shading models, since they are computationally inexpensive and straightforward to implement. However, local shading models do not always provide proper visual cues, because non-local information is not sufficiently taken into account. Global illumination models achieve better visual cues, but they are often computationally expensive. Alternative illumination models, such as ambient occlusion, multidirectional shading, and shadows, have been shown to provide decent perceptual cues. Although these models improve upon local shading, they still require expensive preprocessing, extra GPU memory, and high computational cost, which prevents interactivity during transfer function manipulation and light position changes. In this paper, we propose an approximate image-space multidirectional occlusion shading model for volume rendering. Our model is computationally less expensive than global illumination models and requires no preprocessing. Moreover, it supports interactive transfer function manipulation and light position changes. The model simulates a wide range of shading behaviors, such as ambient occlusion and soft and hard shadows, and can be applied effortlessly to existing rendering systems such as direct volume rendering. We show that the suggested model enhances visual cues at a modest computational cost.
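To illustrate the general idea of image-space occlusion that the abstract refers to, the following is a minimal sketch, not the authors' actual model: occlusion is estimated per pixel directly from a depth buffer by sampling along several screen-space directions and counting how many neighbors lie closer to the camera. The function name, parameters (`radius`, `n_dirs`, `bias`), and the simple occluder-counting scheme are all hypothetical choices for this example.

```python
import numpy as np

def image_space_occlusion(depth, radius=3, n_dirs=8, bias=0.01):
    """Hypothetical image-space occlusion estimate from a depth buffer.

    For each pixel, sample neighbors along `n_dirs` directions up to
    `radius` pixels away; a neighbor closer to the camera than the
    center pixel (by more than `bias`) counts as an occluder.
    Returns per-pixel occlusion in [0, 1].
    """
    occ = np.zeros_like(depth, dtype=float)
    angles = np.linspace(0.0, 2.0 * np.pi, n_dirs, endpoint=False)
    samples = 0
    for a in angles:
        for r in range(1, radius + 1):
            dy = int(round(np.sin(a) * r))
            dx = int(round(np.cos(a) * r))
            # Shift the depth image so each pixel sees its neighbor
            # at offset (dy, dx); edges wrap, which is acceptable for
            # this illustration.
            shifted = np.roll(np.roll(depth, dy, axis=0), dx, axis=1)
            occ += (shifted < depth - bias).astype(float)
            samples += 1
    return occ / samples
```

A flat depth field yields zero occlusion everywhere, while a pixel at the bottom of a pit (deeper than all its neighbors) is fully occluded; the resulting occlusion factor would typically darken the shaded color in a volume-rendering compositing step.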
