James A. Brofos, quantitative strategies researcher

The Eigenfunctions of the Brownian Motion Covariance Kernel

We verify the formula for the eigenfunctions of the Brownian motion covariance kernel, one of the few examples of a Gaussian process whose eigenfunctions can be identified in closed form. Read more.
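For reference, and assuming the post works with the standard Brownian motion covariance kernel $k(s,t) = \min(s,t)$ on $[0,1]$, the closed-form eigenpairs being verified are

$$\int_0^1 \min(s,t)\,\phi_n(s)\,ds = \lambda_n\,\phi_n(t), \qquad \phi_n(t) = \sqrt{2}\,\sin\!\left(\left(n - \tfrac{1}{2}\right)\pi t\right), \qquad \lambda_n = \frac{1}{\left(n - \tfrac{1}{2}\right)^2\pi^2}, \qquad n = 1, 2, \ldots$$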

Expected Number of Iterations to Zero

We compute the expected number of iterations for an inductively-defined stochastic process to reach zero. Read more.

Bernoulli Expectation Maximization

In this post, I compute the expectation and maximization steps for a Bernoulli distribution with two unknown success probabilities. Read more.
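As a concrete illustration (the post's actual model may differ), here is a minimal EM sketch for the classic two-coins setup: each trial consists of $m$ flips of one of two coins, chosen uniformly at random, with unknown head probabilities; the function name and initialization below are illustrative only.

```python
import numpy as np
from scipy.stats import binom

def em_two_coins(heads, m, n_iter=50):
    """EM for a two-coin mixture: each trial is m flips of a coin chosen
    uniformly at random from two coins with unknown head probabilities
    p1 and p2; `heads` holds the number of heads observed in each trial."""
    p1, p2 = 0.4, 0.6                      # arbitrary distinct starting values
    for _ in range(n_iter):
        # E-step: posterior probability that each trial used coin 1.
        l1 = binom.pmf(heads, m, p1)
        l2 = binom.pmf(heads, m, p2)
        gamma = l1 / (l1 + l2)
        # M-step: responsibility-weighted maximum-likelihood estimates.
        p1 = (gamma * heads).sum() / (gamma * m).sum()
        p2 = ((1.0 - gamma) * heads).sum() / ((1.0 - gamma) * m).sum()
    return p1, p2

# Example: 1,000 trials of 10 flips each from coins with p = 0.2 and p = 0.8.
rng = np.random.default_rng(0)
z = rng.random(1000) < 0.5
heads = rng.binomial(10, np.where(z, 0.2, 0.8))
print(em_two_coins(heads, 10))
```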

The Fourier Transform (II)

This article continues our analysis of the Fourier transform by defining the Fourier transform of measures instead of functions. Read more.

The Fourier Transform (I)

This article investigates the Fourier transform and derives some basic properties, such as the fact that the Fourier transform of a Gaussian function is itself Gaussian, as well as the Fourier transform of the rect function. Read more.
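For concreteness, under the unitary convention $\hat{f}(\xi) = \int_{-\infty}^{\infty} f(t)\,e^{-2\pi i t\xi}\,dt$ (the post may use a different normalization), the two results mentioned above take the form

$$\widehat{e^{-\pi t^2}}(\xi) = e^{-\pi\xi^2}, \qquad \widehat{\mathrm{rect}}(\xi) = \mathrm{sinc}(\xi) = \frac{\sin(\pi\xi)}{\pi\xi},$$

where $\mathrm{rect}$ denotes the indicator function of the interval $\left[-\tfrac{1}{2}, \tfrac{1}{2}\right]$.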

Expectation Maximization as Coordinate Ascent

I read a very useful description of the EM algorithm as coordinate ascent and wanted to rederive some of the ideas. Read more.
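The description in question is presumably the familiar free-energy view (as in Neal and Hinton): EM performs coordinate ascent on

$$\mathcal{F}(q, \theta) = \mathbb{E}_{q(z)}\left[\log p(x, z \mid \theta)\right] + \mathcal{H}(q) = \log p(x \mid \theta) - \mathrm{KL}\left(q(z)\,\big\|\,p(z \mid x, \theta)\right),$$

with the E-step maximizing over $q$ (setting $q^{(t+1)}(z) = p(z \mid x, \theta^{(t)})$) and the M-step maximizing over $\theta$.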

Bayesian Optimization for the Design of Hypothesis Tests

Bayesian optimization is an exciting idea in machine learning that has recently achieved state-of-the-art performance by modeling a performance metric (e.g. accuracy or mean squared error) as a function of model hyperparameters. In this post, I speculate about how this technique may be employed in the design of frequentist hypothesis tests. Read more.
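As a minimal sketch of the loop described above (not the post's implementation), one can fit a Gaussian process surrogate to the observed (hyperparameter, metric) pairs and query the point maximizing expected improvement; the objective, search range, and candidate-sampling scheme below are placeholders.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expected_improvement(x_cand, gp, y_best):
    """Expected improvement of candidate points under the GP surrogate
    (the metric is being minimized, e.g. mean squared error)."""
    mu, sigma = gp.predict(x_cand, return_std=True)
    sigma = np.maximum(sigma, 1e-9)
    z = (y_best - mu) / sigma
    return (y_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def bayes_opt(objective, bounds, n_init=5, n_iter=20, seed=0):
    """Model `objective` (a performance metric as a function of a single
    hyperparameter) with a GP and greedily query the expected-improvement
    maximizer among randomly drawn candidates."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(*bounds, size=(n_init, 1))
    y = np.array([objective(x[0]) for x in X])
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    for _ in range(n_iter):
        gp.fit(X, y)
        cand = rng.uniform(*bounds, size=(1000, 1))
        x_next = cand[np.argmax(expected_improvement(cand, gp, y.min()))]
        X = np.vstack([X, x_next])
        y = np.append(y, objective(x_next[0]))
    return X[np.argmin(y)], y.min()
```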

Stein Gaussian Mixture Models

Yesterday I was reading two very interesting papers that basically express the same idea: we can iteratively refine a variational posterior approximation by taking a Gaussian mixture distribution and appending components to it, at each stage reducing the KL-divergence between the true posterior and the mixture approximation. The two papers are Variational Boosting: Iteratively Refining Posterior Approximations and Boosting Variational Inference (you can see they have almost exactly the same name!). In this post, I want to try to connect at least one of the insights here to my research on Stein variational gradient descent. Read more.
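Schematically (and glossing over the details in which the two papers differ), the boosting step described above fits a new Gaussian component and a mixing weight at each stage,

$$q_t(x) = (1 - \rho_t)\,q_{t-1}(x) + \rho_t\,\mathcal{N}\!\left(x; \mu_t, \Sigma_t\right), \qquad (\rho_t, \mu_t, \Sigma_t) \in \arg\min\;\mathrm{KL}\left(q_t\,\big\|\,p(\cdot \mid \mathcal{D})\right),$$

so the KL-divergence to the true posterior cannot increase as components are appended (setting $\rho_t = 0$ recovers the previous approximation).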

Verifying a Stein's Identity Theorem

Read more.

Quarterly Rebalancing of ETFs in Odin

The good folks over at QuantStart wrote an excellent article detailing a strategy that periodically rebalances a portfolio according to some weighting scheme that is specified *a priori*. In this article, I want to demonstrate how a similar strategy can be implemented in Odin. Read more.

Introduction to Odin

In this article, I want to introduce the Odin algorithmic trading and backtesting platform. I will exhibit how Odin can be leveraged to backtest a very simple buy-and-hold strategy for the SPDR S&P 500 ETF (SPY). Read more.

Introduction to Stein Variational Gradient Descent - Part IV

In this post we'll be seeking to understand some of the principles underlying SteinGAN, a method for reducing sampling complexity by training a model (often called the generator) that leverages Stein variational gradient descent. Specifically, let us denote by $\xi$ a random variable drawn from a prior noise distribution; then the objective of SteinGAN is to produce a model $G\left(\xi; \nu\right)$ such that the output of $G$ is distributed according to a target distribution $p\left(x;\theta\right)$. In this setup, both $\nu$, the parameters of the generative model, and $\theta$, the parameters of the target distribution, are unknown and need to be estimated. Read more.
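One way to sketch the generator update this builds toward (the amortized SVGD step, with $k$ a kernel and $\epsilon$ a step size; the exact parameterization in the post may differ) is

$$x_i = G(\xi_i; \nu), \qquad \phi(x_i) = \frac{1}{n}\sum_{j=1}^{n}\left[k(x_j, x_i)\,\nabla_{x_j}\log p(x_j; \theta) + \nabla_{x_j} k(x_j, x_i)\right], \qquad \nu \leftarrow \nu + \epsilon\,\frac{1}{n}\sum_{i=1}^{n}\left(\frac{\partial G(\xi_i; \nu)}{\partial \nu}\right)^{\!\top}\phi(x_i),$$

i.e. the SVGD direction is computed on the generator's outputs and chain-ruled back into the generator parameters.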

A Likelihood Identity for Exponential Families

I wanted to do a quick post to verify an identity regarding exponential families that I encountered while reading about SteinGANs. Read more.

Introduction to Stein Variational Gradient Descent - Part III

In our last discussion, we focused on the kernelized Stein discrepancy and how Stein variational gradient descent can be leveraged to sample from posterior distributions of Bayesian models. Our application area in that case was specifically Bayesian linear regression, where a prior (with a fixed precision) was placed over the coefficients of the linear model. Unfortunately, Bayesian linear regression is somewhat uninteresting because it is possible to compute the closed-form posterior for the coefficients. In this post, we'll demonstrate two statistical models where this is not the case. Read more.
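For reference, the closed-form posterior alluded to (in the usual notation, with a zero-mean Gaussian prior of precision $\alpha$ on the weights $w$ and Gaussian observation noise of precision $\beta$) is

$$p(w \mid X, y) = \mathcal{N}\!\left(w; \mu, \Sigma\right), \qquad \Sigma = \left(\alpha I + \beta X^{\top} X\right)^{-1}, \qquad \mu = \beta\,\Sigma X^{\top} y.$$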

Sharpe Ratio of a Simple Game

A common metric in the field of quantitative finance is the Sharpe ratio. The Sharpe ratio is a measure of the excess return (over the risk-free rate) that a strategy produces relative to the amount of risk it assumes. Here we examine the Sharpe ratio of a simple game. Read more.
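The particular game is not described here, so as a hypothetical illustration consider a game paying $+1$ with probability $p$ and $-1$ otherwise, with the risk-free rate taken to be zero; the per-round Sharpe ratio is then just the mean payoff divided by its standard deviation.

```python
import numpy as np

def sharpe_of_coin_game(p):
    """Analytic per-round Sharpe ratio of a game paying +1 with probability p
    and -1 otherwise (risk-free rate assumed to be zero)."""
    mean = 2.0 * p - 1.0
    std = np.sqrt(1.0 - mean ** 2)        # Var = E[X^2] - E[X]^2 = 1 - mean^2
    return mean / std

# Monte Carlo check of the analytic value for p = 0.55.
rng = np.random.default_rng(0)
payoffs = np.where(rng.random(1_000_000) < 0.55, 1.0, -1.0)
print(sharpe_of_coin_game(0.55), payoffs.mean() / payoffs.std())
```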

Introduction to Stein Variational Gradient Descent - Part II

In this post, we'll be taking a closer look at the *kernelized* Stein discrepancy that provides the real utility underlying Stein variational gradient descent. Read more.
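In the notation of Liu, Lee, and Jordan (the post's notation may differ), with score function $s_p(x) = \nabla_x \log p(x)$, the kernelized Stein discrepancy between $q$ and $p$ is

$$\mathbb{S}(q, p) = \mathbb{E}_{x, x' \sim q}\left[u_p(x, x')\right], \qquad u_p(x, x') = s_p(x)^{\top} k(x, x')\, s_p(x') + s_p(x)^{\top}\nabla_{x'} k(x, x') + \nabla_{x} k(x, x')^{\top} s_p(x') + \mathrm{tr}\!\left(\nabla_{x}\nabla_{x'} k(x, x')\right).$$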

Introduction to Stein Variational Gradient Descent - Part I

Stein variational gradient descent is a technique developed by the Dartmouth machine learning group. The essential idea is to perturb samples from a simple distribution until they approximate draws from a target distribution. Read more.
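Here is a minimal sketch of that particle update for a toy Gaussian target with an RBF kernel; the target, bandwidth, step size, and iteration count are illustrative rather than the settings used in the post.

```python
import numpy as np

def rbf_kernel(x, h):
    """RBF kernel matrix k(x_j, x_i) and its gradient with respect to x_j."""
    diff = x[:, None, :] - x[None, :, :]     # diff[j, i] = x_j - x_i, shape (n, n, d)
    k = np.exp(-(diff ** 2).sum(-1) / (2.0 * h ** 2))
    grad_k = -diff * k[:, :, None] / h ** 2  # d k(x_j, x_i) / d x_j
    return k, grad_k

def svgd(score, x, step=0.1, n_iter=500, h=0.5):
    """Perturb the particles `x` along the SVGD direction so that they come
    to approximate draws from the target whose score function is `score`."""
    n = x.shape[0]
    for _ in range(n_iter):
        k, grad_k = rbf_kernel(x, h)
        # Attraction toward high density plus a kernel-induced repulsion term.
        phi = (k @ score(x) + grad_k.sum(axis=0)) / n
        x = x + step * phi
    return x

# Target N(3, 1), so score(x) = -(x - 3); particles start from N(0, 1).
rng = np.random.default_rng(0)
particles = svgd(lambda x: -(x - 3.0), rng.normal(0.0, 1.0, size=(100, 1)))
print(particles.mean(), particles.std())
```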