Pricing derivatives on graphics processing units using Monte Carlo simulation
Abstract
This paper applies the Monte Carlo approach to pricing European and American contracts on a state-of-the-art graphics processing unit (GPU) architecture. First, we adapt to a cluster of GPUs two suitable paradigms for parallelizing random number generators that were originally developed for CPU clusters. Because financial applications demand results within seconds of simulation, sufficiently large computations should be distributed across a cluster of machines. We therefore compare CPU and GPU performance for European contracts on one to 16 nodes of a CPU/GPU cluster. We show that using GPUs for European contracts reduces the execution time by a factor of ∼40 and the energy consumed during the simulation by a factor of ∼50. In a second set of experiments, we investigate the benefits of GPU parallelization for pricing American options, which require solving an optimal stopping problem and which we implement using the Longstaff and Schwartz regression method. The speedup obtained for American options varies between 2 and 10, depending on the number of generated paths, the dimension of the problem, and the time discretization.
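The abstract names the Longstaff and Schwartz regression method as the pricing algorithm for American options. As a point of reference, the following minimal single-threaded sketch in Python/NumPy illustrates the backward-induction regression step for an American (Bermudan) put under single-asset Black-Scholes dynamics; all parameters and the quadratic regression basis are illustrative assumptions and do not reproduce the paper's GPU implementation or its experimental settings.

```python
# Minimal Longstaff-Schwartz sketch for an American put under
# Black-Scholes dynamics. All parameters are hypothetical and chosen
# only for illustration; they are not the paper's experimental setup.
import numpy as np

def longstaff_schwartz_put(S0=100.0, K=100.0, r=0.05, sigma=0.2,
                           T=1.0, n_steps=50, n_paths=100_000, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    # Simulate geometric Brownian motion paths: shape (n_steps+1, n_paths).
    z = rng.standard_normal((n_steps, n_paths))
    log_inc = (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
    S = S0 * np.exp(np.vstack([np.zeros(n_paths),
                               np.cumsum(log_inc, axis=0)]))

    payoff = lambda s: np.maximum(K - s, 0.0)
    cash_flow = payoff(S[-1])  # exercise value at maturity
    # Backward induction: at each date, regress the discounted future
    # cash flow on the asset price over in-the-money paths, and exercise
    # wherever the immediate payoff exceeds the estimated continuation.
    for t in range(n_steps - 1, 0, -1):
        cash_flow *= np.exp(-r * dt)       # discount one step back
        itm = payoff(S[t]) > 0.0           # regress on ITM paths only
        if itm.any():
            # Quadratic polynomial basis, as in Longstaff & Schwartz (2001).
            coeffs = np.polyfit(S[t, itm], cash_flow[itm], deg=2)
            continuation = np.polyval(coeffs, S[t, itm])
            exercise = payoff(S[t, itm]) > continuation
            cash_flow[itm] = np.where(exercise, payoff(S[t, itm]),
                                      cash_flow[itm])
    return np.exp(-r * dt) * cash_flow.mean()

print(f"LS American put price: {longstaff_schwartz_put():.4f}")
```

Restricting the regression to in-the-money paths, as done above, follows the original Longstaff-Schwartz recommendation: the exercise decision is only relevant where the immediate payoff is positive, and excluding out-of-the-money paths improves the regression fit in the region that matters.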