Distance Function For Matched Randomization

6 Oct

What is the right distance function for creating greater balance before we randomize? One way is to not think too hard about the distance function at all. For instance, this paper takes all the variables, treats them identically (normalizing if you want to), and uses Mahalanobis distance, or some similar metric. There are two decisions here: which subspace of variables to match on, and how to weight them.
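To make that ad hoc version concrete, here is a minimal Python sketch (not from the paper) of matched-pair randomization on all covariates using Mahalanobis distance. The sample size, covariates, and greedy pairing are illustrative assumptions.

```python
# A minimal sketch of matched-pair randomization using Mahalanobis distance
# on all observed covariates. The data and greedy pairing are illustrative.
import numpy as np
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)

n_units, n_covariates = 20, 3
X = rng.normal(size=(n_units, n_covariates))   # observed covariates

# Mahalanobis distance treats all covariates "the same" after accounting
# for their covariance; no further subspace or weight choices are made.
VI = np.linalg.inv(np.cov(X, rowvar=False))
D = cdist(X, X, metric="mahalanobis", VI=VI)
np.fill_diagonal(D, np.inf)

# Greedy pairing on the distance matrix (optimal non-bipartite matching
# would do better, but this keeps the sketch short).
unmatched = set(range(n_units))
pairs = []
while len(unmatched) > 1:
    i = min(unmatched)
    unmatched.remove(i)
    j = min(unmatched, key=lambda k: D[i, k])
    unmatched.remove(j)
    pairs.append((i, j))

# Randomize treatment within each matched pair.
assignment = np.zeros(n_units, dtype=int)
for i, j in pairs:
    assignment[rng.choice([i, j])] = 1
print(assignment)
```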

Surprisingly, these ad hoc choices don’t have serious pitfalls, except that the balance we ultimately get on Y_0 (which is the quantity of interest) may not be great. There is also one case where the method will fail. The point is best illustrated with a contrived example. Imagine there is just one observed X, and say it is pure noise. If we match on that noise and then randomize, matching will increase the imbalance in Y_0 about half the time and decrease it the other half. In all, the benefit of spending a lot of energy on improving balance in an ad hoc space, which may or may not help the true objective function, is likely overstated.
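Here is a small, illustrative simulation of that contrived example: matching on a pure-noise X and then randomizing within pairs improves Y_0 balance over simple randomization only about half the time. The sample sizes and normal draws are assumptions made for illustration.

```python
# Simulate matched-pair randomization on a pure-noise covariate and compare
# the resulting imbalance in Y_0 against simple randomization.
import numpy as np

rng = np.random.default_rng(1)
n_units, n_sims = 40, 2000
worse = 0

for _ in range(n_sims):
    X = rng.normal(size=n_units)    # single observed covariate: pure noise
    Y0 = rng.normal(size=n_units)   # potential outcome, independent of X

    # Matched-pair randomization on X: sort by X, pair adjacent units,
    # randomize within each pair.
    order = np.argsort(X)
    matched = np.zeros(n_units, dtype=int)
    for a, b in order.reshape(-1, 2):
        matched[rng.choice([a, b])] = 1

    # Simple randomization: treat half the units at random.
    simple = np.zeros(n_units, dtype=int)
    simple[rng.choice(n_units, n_units // 2, replace=False)] = 1

    imb_matched = abs(Y0[matched == 1].mean() - Y0[matched == 0].mean())
    imb_simple = abs(Y0[simple == 1].mean() - Y0[simple == 0].mean())
    worse += imb_matched > imb_simple

# Matching on noise beats simple randomization only about half the time.
print(worse / n_sims)
```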

If we have a baseline survey with baseline Ys, and we assume that lagged Y predicts Y_0, then the optimal strategy is to match on lagged Y. If we have surveys from multiple time periods, we can build a supervised learning model to predict Y in the next period and match on the predicted Y_hat. The same logic applies when we don’t have lagged Y for all the rows: we can impute the missing values with supervised learning.
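Here is a hedged sketch of that strategy: fit a supervised model on an earlier wave, predict the next-period Y, and match on the prediction before randomizing within pairs. The data, column names, and the choice of a random forest are hypothetical, not from the post.

```python
# Sketch: predict next-period Y from earlier waves, then match on Y_hat.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)
n = 200
df = pd.DataFrame({"x1": rng.normal(size=n), "x2": rng.normal(size=n)})
df["y_t1"] = df["x1"] + rng.normal(scale=0.5, size=n)    # earlier wave
df["y_t2"] = df["y_t1"] + rng.normal(scale=0.5, size=n)  # later wave

# Train on the earlier waves to predict the next-period outcome.
model = RandomForestRegressor(random_state=0)
model.fit(df[["x1", "x2", "y_t1"]], df["y_t2"])
y_hat = model.predict(df[["x1", "x2", "y_t1"]])

# If lagged Y were missing for some rows, the same kind of model could
# impute it first before forming Y_hat.

# Match on Y_hat (pair adjacent units) and randomize within pairs.
order = np.argsort(y_hat)
assignment = np.zeros(n, dtype=int)
for a, b in order.reshape(-1, 2):
    assignment[rng.choice([a, b])] = 1
print(assignment[:10])
```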
