Abstract
The effect of choosing a very small sampling interval when identifying continuous-time systems from sampled input-output data is illustrated, and a method for obtaining the optimum sampling interval is proposed. The method allows one to start with an arbitrarily small sampling interval without running into computational problems and to converge eventually to the proper sampling interval. Several simulated examples demonstrate the usefulness of the method even when the measurements are contaminated with a considerable amount of noise.
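The abstract does not spell out the convergence criterion, so the following is only a rough, hypothetical sketch of the general idea (not the paper's algorithm): start from an arbitrarily small sampling interval and enlarge it until the estimated continuous-time parameters stop changing. The first-order system, zero-order-hold input assumption, noise level, doubling rule, and stopping tolerance are all illustrative choices, not taken from the paper.

```python
# Hypothetical illustration (not the authors' algorithm): identify a first-order
# continuous-time system dy/dt = -a*y + b*u from noisy samples, doubling the
# sampling interval until the continuous-time parameter estimates stabilize.
import numpy as np

rng = np.random.default_rng(0)

# --- simulate an assumed "true" system on a fine grid ---
a_true, b_true = 2.0, 1.0          # example parameter values (assumed)
dt = 1e-4                          # fine integration step
t = np.arange(0.0, 20.0, dt)
u = np.sign(np.sin(0.7 * t) + 0.3 * np.sin(2.3 * t))   # switching excitation
y = np.zeros_like(t)
for k in range(len(t) - 1):        # forward-Euler integration
    y[k + 1] = y[k] + dt * (-a_true * y[k] + b_true * u[k])
y_noisy = y + 0.02 * rng.standard_normal(len(y))        # measurement noise

def estimate_ct_params(T_samp):
    """Least-squares fit of y[k+1] = alpha*y[k] + beta*u[k] on data decimated
    to interval T_samp, mapped back to continuous-time (a_hat, b_hat)."""
    step = int(round(T_samp / dt))
    ys, us = y_noisy[::step], u[::step]
    Phi = np.column_stack([ys[:-1], us[:-1]])
    alpha, beta = np.linalg.lstsq(Phi, ys[1:], rcond=None)[0]
    a_hat = -np.log(alpha) / T_samp          # alpha = exp(-a*T) under ZOH input
    b_hat = beta * a_hat / (1.0 - alpha)     # beta  = (b/a)(1 - exp(-a*T))
    return a_hat, b_hat

# --- start from an arbitrarily small interval, double until estimates settle ---
T, prev = 1e-3, None
while T < 1.0:
    est = np.array(estimate_ct_params(T))
    if prev is not None and np.max(np.abs(est - prev) / np.abs(prev)) < 1e-2:
        break                                # estimates stopped changing: accept T
    prev, T = est, 2 * T

print(f"chosen interval T = {T:.4f}s, a_hat = {est[0]:.3f}, b_hat = {est[1]:.3f}")
```

With a very small interval the discrete pole sits close to 1 and the noise dominates the sample-to-sample signal variation, so the estimates are poor; as the interval grows they settle, which is the behaviour the abstract alludes to.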
