One of the key challenges Findi aims to address is the difficulty of finding someone “good” in the financial planning space. In particular, when choosing an investment advisor, it can be very difficult to separate luck from skill. Common cognitive biases only exacerbate this difficulty, and we’ll walk through three key ones below. We won’t address the nuanced question of whether advisors who can consistently outperform the market exist; instead, we’ll focus on how even random chance can look like skill under the right circumstances.
Specifically, over the next few posts we’ll consider a hypothetical investment firm, MTD (Monkeys Throwing Darts) Partners, designed to achieve random results. We’ll walk through these results in detail and illustrate how cognitive biases may lead us to see a pattern of successful management, even though we’ve explicitly designed things to be completely random.

Our (Fictional) Firm
We operate our firm following a unique investment methodology. We’ve hired 1,000 of the brightest monkeys in the industry (all with research experience at prestigious universities) and equipped them with darts to throw at a list of S&P 500 stocks. MTD Partners builds unique portfolios based on what our simian advisors select and regularly evaluates their performance. Below, we’ll walk you through a portion of our investment prospectus, examining performance over the last 10 years.
We wanted to create a performance-driven culture at MTD Partners and ensure our best employees rose to the top. To encourage this, we’ve decided to implement performance reviews in which we let the bottom 20% of performers go. To keep our lives simple (and avoid crushing morale), we only go through this process twice over the course of the 10 years. That means we fire 36% of the workforce over the decade, or roughly 4% per year, an extremely low turnover rate compared to many firms.
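If you want to check that turnover arithmetic yourself, a couple of lines of Python will do it; the figures below are just the ones stated above, nothing new.

```python
# Two review cycles, each letting go the bottom 20% of remaining advisors.
advisors = 1_000
for _ in range(2):
    advisors = int(advisors * 0.80)                 # 1,000 -> 800 -> 640

fired = 1 - advisors / 1_000                        # 1 - 0.8**2 = 0.36
print(f"Fired over the decade: {fired:.0%}")        # 36%
print(f"Averaged per year:     {fired / 10:.1%}")   # ~3.6%, i.e. roughly 4%
```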
At the start of the 10-year period, we’ll give each advisor $10,000 to invest and watch what happens. To run the hypothetical simulation, we need to make a number of assumptions (if you don’t care about the math, just skip to the next section). In particular, we’ll assume annual portfolio returns are normally distributed about the mean of the S&P 500 over the past 10 years. We’ll be drawing from a distribution with a mean of 15% and a standard deviation of 12%, so there will be a lot of noise in the results. This time period likely reflects an unusually bullish streak, but since we only started the firm 10 years ago, it’s all the data we have.

Though it’s outside the scope of this post to explain in detail, the technique we’ll use to simulate the performance is Monte Carlo simulation, a well-established methodology (with well-established limitations). We’ve run it here using a simple Python script and would be happy to share the code upon request.
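We haven’t included the actual script here, but a minimal sketch of the simulation described above might look something like the following. It uses numpy; the 1,000 advisors, $10,000 starting stake, 15% mean return, 12% standard deviation, and two bottom-20% cuts all come from the assumptions stated earlier, while the specific review years are a placeholder choice of ours.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

n_advisors, years = 1_000, 10
mean_return, std_return = 0.15, 0.12            # assumptions from the text
portfolios = np.full(n_advisors, 10_000.0)      # each advisor starts with $10,000

for year in range(1, years + 1):
    # Each advisor's annual return is an independent draw from N(15%, 12%).
    annual_returns = rng.normal(mean_return, std_return, size=portfolios.size)
    portfolios *= 1 + annual_returns

    # Twice over the decade, let the bottom 20% of performers go.
    if year in (4, 8):                          # hypothetical review years
        cutoff = np.percentile(portfolios, 20)
        portfolios = portfolios[portfolios > cutoff]

print(f"Advisors remaining: {portfolios.size}")
print(f"Median portfolio:   ${np.median(portfolios):,.0f}")
print(f"Best portfolio:     ${portfolios.max():,.0f}")
```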
Anchoring Bias
On average, we expect our portfolios to return the mean of the S&P 500. If the S&P averaged 15% over the last 10 years, then on average you should expect to roughly double your money every five years or so with one of our portfolios. Because our advisors choose randomly from the S&P 500, though, the performance of any individual portfolio may deviate from this average.
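If you’d like to check that doubling time yourself, it falls straight out of the compounding formula, and it agrees with the familiar rule-of-72 shortcut (72 / 15 ≈ 4.8 years):

```python
import math

annual_return = 0.15
doubling_time = math.log(2) / math.log(1 + annual_return)
print(f"Years to double at 15%: {doubling_time:.1f}")   # ~5.0 years
```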
In particular, let’s consider an advisor who underperforms slightly, achieving returns of 13%. If we’d invested our $10,000 with this advisor at the start of the period, we’d have $33,946 at the end of 10 years. That’s a gain of $23,946 on our $10,000, or a total return of nearly 240%, which looks pretty good at first blush.
However, this view of the world is skewed. We’re implicitly comparing our advisor’s performance with how money typically behaves in our wallets, which is to say against money that doesn’t grow at all. Instead, when evaluating performance we should use a representative benchmark, and in this case the obvious choice is the overall S&P 500. When we compare against a portfolio growing at the 15% average, things don’t look so good. If we’d invested across the entire S&P 500 (e.g., by buying an S&P 500 index fund), we’d have ended up with roughly $40,456, meaning we actually lost out on about $6,510 by going with this advisor.
Even though our advisor’s returns leave us with roughly 16% less than if we’d invested in the market overall, if we compare against simply not investing the money at all, we may come away thinking our portfolio did pretty well.
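For the concrete-minded, the whole anchoring example above boils down to a few lines of Python comparing our 13% advisor against the 15% benchmark:

```python
principal, years = 10_000, 10
advisor_rate, benchmark_rate = 0.13, 0.15       # our advisor vs. the S&P 500 average

advisor_value = principal * (1 + advisor_rate) ** years       # ~$33,946
benchmark_value = principal * (1 + benchmark_rate) ** years   # ~$40,456

print(f"Advisor:   ${advisor_value:,.0f} ({advisor_value / principal - 1:.0%} total return)")
print(f"Benchmark: ${benchmark_value:,.0f}")
print(f"Shortfall: ${benchmark_value - advisor_value:,.0f}")  # ~$6,510
```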
The non-intuitive nature of exponential growth only compounds this bias. Unless an advisor is consistently losing money, if we wait long enough literally any return will look big in absolute terms. Most of us don’t have many investments we’ve held for decades and thus don’t have great comparison points. You see this all the time when people tell you how their grandparents bought a house in 1941 for $20,000 and now it’s worth $440,000. Yes, compound interest is amazing, but that works out to an annualized return of only about 4%. If they’d taken the down payment money and put it in the S&P 500 instead, they’d now be sitting on more than $2,000,000.
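The same arithmetic applies to the grandparents’ house. Note that the 80-year holding period below is our assumption (1941 to roughly the time of writing), not a figure from the story:

```python
purchase_price, current_value = 20_000, 440_000
years_held = 80                                 # assumed: 1941 to roughly today

cagr = (current_value / purchase_price) ** (1 / years_held) - 1
print(f"Annualized return on the house: {cagr:.1%}")   # ~3.9%, i.e. about 4%
```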
Even considering anchoring alone, you can start to see how cognitive biases distort our evaluation of investment performance. But this is just the tip of the iceberg. In the next few posts, we’ll examine how other biases make evaluating performance even trickier.