Abstract
In scientific visualization, input data sets are often very large. In computational fluid dynamics (CFD) visualization in particular, input data sets today can exceed 100 Gbytes, and they are expected to scale with the ability of supercomputers to generate them. Some visualization tools already partition large data sets into segments and load the appropriate segments as they are needed. However, this does not eliminate the problem, for two reasons: 1) there are data sets for which even the individual segments are too large for the largest graphics workstations; 2) many practitioners do not have access to workstations with the memory capacity required to load even a segment, especially since state-of-the-art visualization tools tend to be developed by researchers with access to much more powerful machines. When the size of the data that must be accessed exceeds the size of memory, some form of virtual memory is required, whether implemented by segmentation, paging, or paged segments. The authors demonstrate that complete reliance on operating-system virtual memory for out-of-core visualization leads to egregiously poor performance. They then describe a paged-segment system that they have implemented and explore the principles of memory management that can be employed by the application for out-of-core visualization. They show that application control over some of these policies can significantly improve performance, and that sparse traversal can be exploited by loading only the data actually required.
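To make the approach concrete, the sketch below shows one way application-controlled demand paging over a paged-segment file could look: only the fixed-size pages actually touched by a traversal are read from disk, and a small resident set is managed with an LRU policy. This is a minimal sketch under stated assumptions, not the authors' implementation; the file name, page size, cache budget, and all identifiers (Slot, fetch_page, read_float) are illustrative.

```c
/* Minimal sketch of application-controlled demand paging for
 * out-of-core data access. Assumes the data set is one file
 * divided into fixed-size pages; names and sizes are illustrative
 * and do not reflect the authors' actual system. */
#include <stdio.h>
#include <string.h>

#define PAGE_SIZE   (64 * 1024)  /* bytes per page (assumed)        */
#define CACHE_PAGES 256          /* resident-page budget (assumed)  */

typedef struct {
    long page_no;                /* which file page this slot holds */
    long last_use;               /* logical clock for LRU eviction  */
    char data[PAGE_SIZE];
} Slot;

static Slot cache[CACHE_PAGES];
static long clock_now = 0;

/* Return a pointer to the in-memory copy of page `page_no`,
 * demand-loading it from `f` on a miss and evicting the LRU slot.
 * A production system would use 64-bit offsets (fseeko/pread). */
static char *fetch_page(FILE *f, long page_no)
{
    int victim = 0;
    for (int i = 0; i < CACHE_PAGES; i++) {
        if (cache[i].page_no == page_no) {          /* cache hit */
            cache[i].last_use = ++clock_now;
            return cache[i].data;
        }
        if (cache[i].last_use < cache[victim].last_use)
            victim = i;                             /* track LRU slot */
    }
    /* Miss: load only this page, never the whole segment. */
    fseek(f, page_no * (long)PAGE_SIZE, SEEK_SET);
    fread(cache[victim].data, 1, PAGE_SIZE, f);     /* short read at EOF ignored for brevity */
    cache[victim].page_no  = page_no;
    cache[victim].last_use = ++clock_now;
    return cache[victim].data;
}

/* Read one float at byte offset `off`. Assumes the value does not
 * straddle a page boundary (true for float-aligned offsets when
 * PAGE_SIZE is a multiple of sizeof(float)). */
static float read_float(FILE *f, long off)
{
    char *page = fetch_page(f, off / PAGE_SIZE);
    float v;
    memcpy(&v, page + off % PAGE_SIZE, sizeof v);
    return v;
}

int main(void)
{
    FILE *f = fopen("bigdata.raw", "rb");  /* hypothetical data file */
    if (!f) return 1;
    for (int i = 0; i < CACHE_PAGES; i++)
        cache[i].page_no = -1;             /* mark all slots empty */
    printf("%g\n", read_float(f, 123456L));
    fclose(f);
    return 0;
}
```

Under a sparse traversal, only the pages containing visited values are ever read, so total I/O tracks the data actually required rather than the segment size; the resident-set bound keeps memory use fixed regardless of how large the underlying file grows.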
