On targeting Markov segments

Abstract
Consider two user populations, of which one is targeted and the other is not. Users in the targeted population follow a Markov chain on a space of states. The untargeted population follows another Markov chain, defined on the same set of states. Each time a user arrives at a state, he/she is presented with information appropriate for the targeted population (an advertisement, or a recommendation) with some probability. Presenting the advertisement incurs a cost. Notice that while the revenue grows in proportion to the flow of targeted users through the state, the cost grows in proportion to the total flow (targeted and untargeted) through the state. How can we compute the best advertisement policy? The world-wide web is a natural setting for such a problem. Internet service providers have trail information for building such Markovian user models where states correspond to pages on the web. In this paper we study the simple problem above, as well as the variants with multiple targetable segments. In some settings the policy need not be a static probability distribution on states. Instead, we can dynamically vary the policy based on the user's path through the states.
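The per-state revenue/cost tradeoff described above can be sketched numerically. The following is a minimal illustration, not the paper's algorithm: it assumes steady-state user flows, and computes, for each state, whether the expected revenue from targeted users exceeds the advertising cost charged on all users passing through. The transition matrices, population sizes, and revenue/cost figures are invented example values.

```python
import numpy as np

def stationary(P):
    """Stationary distribution of a row-stochastic transition matrix P."""
    vals, vecs = np.linalg.eig(P.T)
    # Eigenvector for eigenvalue 1, normalized to a probability vector.
    v = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    return v / v.sum()

# Hypothetical two-state example: chains for targeted and untargeted users.
P_t = np.array([[0.1, 0.9],
                [0.5, 0.5]])      # targeted population's chain
P_u = np.array([[0.8, 0.2],
                [0.6, 0.4]])      # untargeted population's chain
n_t, n_u = 1000.0, 9000.0         # assumed population sizes
revenue, cost = 1.0, 0.2          # per targeted impression / per display

flow_t = n_t * stationary(P_t)    # targeted flow through each state
flow_u = n_u * stationary(P_u)    # untargeted flow through each state

# Static per-state policy: advertise where marginal profit is positive,
# i.e. revenue on the targeted flow exceeds cost on the total flow.
policy = revenue * flow_t > cost * (flow_t + flow_u)
```

Under these numbers, state 1 attracts a mostly targeted flow and gets the advertisement, while state 0 is dominated by untargeted traffic and does not. The dynamic variants mentioned in the abstract would condition this decision on the user's observed path rather than on stationary flows alone.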
