Distributed Strategic Learning with Application to Network Security
Abstract
In this paper, we consider a class of two-player nonzero-sum stochastic games with incomplete information. We develop fully distributed reinforcement learning algorithms that require each player to have only minimal information about the other player. At each time step, each player is either in an active mode or in a sleep mode. A player in the active mode updates her strategy and her estimates of unknown quantities using a specific pure or hybrid learning pattern. Using stochastic approximation techniques, we show that, under appropriate conditions, the pure and hybrid learning schemes with random updates can be studied through their deterministic ordinary differential equation (ODE) counterparts. Convergence to state-independent equilibria is analyzed for specific classes of payoff functions. The results are applied to a class of security games in which the attacker and the defender adopt different learning schemes and update their strategies at random times.
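To fix ideas, a generic form of such a random-update learning iterate and its ODE limit is sketched below; this is illustrative notation only, not the paper's exact update rules: the step sizes $\lambda_t$, drift $f^j$, noise terms $M^j_{t+1}$, and activity rates $\bar\rho_j$ are placeholder symbols.
\[
x^{j}_{t+1} \;=\; x^{j}_{t} \;+\; \lambda_t\,\mathbb{1}\{j \text{ active at time } t\}\,\bigl(f^{j}(x_t) + M^{j}_{t+1}\bigr), \qquad j \in \{1,2\}.
\]
Under standard stochastic approximation conditions ($\sum_t \lambda_t = \infty$, $\sum_t \lambda_t^{2} < \infty$, and $(M^{j}_{t})$ a martingale difference sequence), the interpolated iterates track the deterministic ODE
\[
\dot{x}^{j} \;=\; \bar\rho_j\, f^{j}(x), \qquad j \in \{1,2\},
\]
where $\bar\rho_j$ denotes the long-run fraction of time player $j$ is active, which is how the random (active/sleep) update pattern enters the ODE limit.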