A graph-theoretic model for software systems is presented which permits a system to be characterized by its set of allowable execution sequences. It is shown how a system can be structured so that every execution sequence affected by a control fault is obviously in error, i.e., not in the allowable set defined by the system model. Faults are detected by monitoring the execution sequence of every transaction processed by the system and comparing it to the set of allowable sequences. Algorithms are presented both for structuring a system so that all control faults can be detected and for detecting faults concurrently with system operation. Simulation results are presented that support the theoretical development of the paper.
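To make the detection idea concrete, the following is a minimal sketch, not the paper's algorithm: it assumes a hypothetical set of module names and a hand-built transition graph, treats the allowable execution sequences as the paths of that directed graph, and checks each transaction's observed sequence against it, either after the fact or transition by transition as the transaction runs.

```python
# Illustrative sketch only: a run-time monitor that checks each transaction's
# observed execution sequence against a graph of allowable transitions.
# The module names, graph, and functions below are hypothetical examples.

ALLOWED_TRANSITIONS = {          # directed graph: node -> set of allowed successors
    "entry":    {"validate"},
    "validate": {"process", "reject"},
    "process":  {"commit"},
    "reject":   {"exit"},
    "commit":   {"exit"},
    "exit":     set(),
}

START, FINAL = "entry", {"exit"}


def is_allowable(sequence):
    """Return True iff the sequence is a path in the graph from START to a FINAL node."""
    if not sequence or sequence[0] != START or sequence[-1] not in FINAL:
        return False
    return all(b in ALLOWED_TRANSITIONS.get(a, set())
               for a, b in zip(sequence, sequence[1:]))


def monitor(sequence):
    """Concurrent-style check: flag a control fault at the first illegal transition."""
    prev = None
    for node in sequence:
        if prev is None:
            if node != START:
                return f"control fault: transaction must start at {START!r}"
        elif node not in ALLOWED_TRANSITIONS.get(prev, set()):
            return f"control fault: illegal transition {prev!r} -> {node!r}"
        prev = node
    return None if prev in FINAL else "control fault: transaction ended prematurely"


if __name__ == "__main__":
    print(is_allowable(["entry", "validate", "process", "commit", "exit"]))  # True
    print(monitor(["entry", "validate", "commit", "exit"]))  # illegal transition flagged
```

In this reading, structuring the system so that every control fault is detectable corresponds to choosing the graph so that no faulty sequence coincides with an allowable path, while the `monitor` function illustrates detection concurrent with system operation.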