Comparison of multiscale representations for a linking-based image segmentation model

Abstract
Different multiscale generators are qualitatively compared with respect to their performance within a multiscale linking model for image segmentation. The linking model used is the hyperstack, which was inspired by linear scale space theory. The authors discuss which properties of this paradigm are essential for determining which multiscale representations are suited as input to the hyperstack. Once a representation is selected, one of the main problems the authors tackle is the estimation of the local scale, so that the various stacks of images can effectively be compared. For nonlinear multiscale representations, which can be written as modified diffusion equations, an upper bound can be achieved by synchronizing the evolution parameter. The synchronization is empirically verified by counting the number of elliptic patches at corresponding scales. The authors compare the resulting stacks of images and the segmentations on a test image and a coronal MR brain image.
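For context, the linear scale space underlying the hyperstack is a stack of progressively Gaussian-blurred versions of the input image, where blurring with standard deviation σ corresponds to evolving the heat equation to time t = σ²/2. The sketch below, a minimal illustration not taken from the paper (function names and the sampling of σ are the author's own assumptions), builds such a stack with NumPy:

```python
import numpy as np

def gaussian_kernel(sigma):
    """1-D Gaussian kernel, truncated at 3*sigma and normalized to sum to 1."""
    radius = int(3 * sigma) + 1
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return k / k.sum()

def scale_space_stack(image, sigmas):
    """Linear (Gaussian) scale space: level i is the input blurred with
    sigmas[i], i.e. the heat equation evolved to time t = sigma**2 / 2.
    This is only an illustrative sketch of a multiscale generator, not the
    hyperstack linking model itself."""
    levels = []
    for sigma in sigmas:
        k = gaussian_kernel(sigma)
        pad = len(k) // 2
        # reflective padding keeps the output the same size as the input
        padded = np.pad(image, pad, mode="reflect")
        # separable convolution: filter rows first, then columns
        rows = np.apply_along_axis(
            lambda r: np.convolve(r, k, mode="valid"), 1, padded)
        blurred = np.apply_along_axis(
            lambda c: np.convolve(c, k, mode="valid"), 0, rows)
        levels.append(blurred)
    return np.stack(levels)
```

Because the kernel is normalized, a constant image is left unchanged at every scale, and increasing σ progressively removes fine structure, which is what allows coarse-scale voxels to be linked down to finer scales.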
