Advantage/Disadvantage vs. Direct Modifiers in Dungeons & Dragons

This year, after a hiatus that lasted a couple decades, I started playing Dungeons & Dragons again. I’m a bit late to the D&D renaissance – it has broken into the mainstream so thoroughly that it has appeared in a number of popular (read: target audience != nerds) TV shows. D&D is popular enough that I assume anyone reading this is familiar with the basics: players take on the role of heroes and collaboratively tell a story, using dice rolls to determine the success of their attempted actions.

The most common roll in D&D involves rolling a 20 sided die and comparing the result to some pre-determined threshold value set by the dungeon master (a sort of narrator/referee for the game). Your character’s chance of success isn’t left to a coin flip: if they’re attempting something they’re good at, like a keen-eyed elven archer shooting her bow, you get to add a number to the die roll before comparing it to the target value. Likewise, if they’re attempting something they’re bad at, like a dim-witted orc trying to see through an illusion, you have to subtract a number from the die roll. These added/subtracted numbers, called modifiers, have been used since the first version of D&D, published some 40-odd years ago. The most recent edition of D&D (5th edition, released in 2014) still uses modifiers, but it has also added a new twist: advantage and disadvantage.

Previously, everything was handled with modifiers: both the inherent abilities of your character and the circumstances of a particular moment. For example, the elven archer might get a +6 modifier on any attack made with her bow, and if she was attacking an unsuspecting victim who hadn’t noticed her yet, she might get an additional +4 modifier. Depending on the circumstances of a particular action, many different modifiers could apply, and you would add them all up to find the final modifier to use. In 5th edition, there are still modifiers, but they primarily apply to the inherent abilities of the hero. The circumstances of the particular action use a new system called advantage and disadvantage. Most checks will be made without advantage or disadvantage, and you simply roll the 20 sided die and add your inherent modifier. If the circumstances are favorable to your character’s success (e.g., the aforementioned bow-shooting while not being noticed), you can roll with advantage, which means you get to roll two 20 sided dice and take the higher value. If the circumstances are unfavorable, you roll with disadvantage, meaning that you roll two 20 sided dice and take the lower value.

The advantage system is more elegant: you no longer need to determine a numerical modifier for each situation; you just decide whether it calls for advantage, disadvantage, or neither. However, it’s also less flexible, as it can’t distinguish degrees of favorability among the cases where advantage applies. With positive modifiers, you can give +1, +2, +3, and beyond. With advantage, you either get it on the roll or you don’t.

When I first learned about this system, advantage seemed incredibly powerful to me, and like something that should be used sparingly. Getting to roll twice and choosing the higher value intuitively feels like you should almost always succeed! But as we’ll get to in the real meat of this post, that is not necessarily the case. Since this is ultimately all about probability, we can convert between advantage and an “effective modifier”, to see how much likelier advantage makes us to succeed on a roll.

The target value you are trying to beat (or match) with your roll is called a difficulty class, or DC. Without modifiers or advantage/disadvantage, it’s simple to calculate your chance of success. There are 20-DC sides that would beat the DC, and one side that would match it. A fair 20 sided die has an equal chance of landing on any of its 20 sides, so your chance of success is given by:

\textrm{prob. success}=\frac{20-DC}{20}+\frac{1}{20}=\frac{21-DC}{20}

If we add in modifiers, it doesn’t complicate things much. A modifier of +3 means that there are three additional sides we can roll on that die that will lead to success, while a modifier of -2 means there are two fewer sides. So, adding this into our equation, we get:

\textrm{prob. success}=\frac{21-DC+\textrm{mod}}{20}

We can see that changing the modifier by 1 changes the probability of success by 1/20, or 5%. This corresponds to the 20 sided die having a 5% chance of landing on any given side, and changing the modifier by 1 leading to one additional (or fewer) side of the die leading to success.

This makes it very easy to see how changing a modifier affects probability. Assuming that the DC is in the range where it will be possible for us to succeed or fail (i.e., it’s not extremely low like -4 or extremely high like 37), a +2 modifier will always improve our probability of success by 10%, and a -5 modifier will always decrease the probability of success by 25%.

To see how advantage and disadvantage affect our probability of success, it is helpful to define a more convenient version of the DC. We’ll use an “effective DC”, which we calculate as EDC = DC – 1 – mod. This allows us to rewrite the equation above in a cleaner way:

\textrm{prob. success}=\frac{20-EDC}{20} = 1 - \frac{EDC}{20}

And we can also calculate our chance of failure:

\textrm{prob. failure}=\frac{EDC}{20}

To calculate probabilities of rolls made with advantage or disadvantage, you need to understand the probabilities of independent events. The result of one roll doesn’t affect the result of the other, so the two rolls can be treated as independent occurrences. When we roll with advantage, we get to choose the higher number, so to fail a roll made with advantage, both dice have to fail – the equivalent of failing twice in a row. The probability of failing twice in a row is just the probability of failing once times the probability of failing once:

\textrm{prob. failure w/ adv.}=\left(\frac{EDC}{20}\right)^2

and thus the probability of success is just 1 minus the result above:

\textrm{prob. success w/ adv.}=1-\left(\frac{EDC}{20}\right)^2

With advantage, we’re squaring the fraction that we subtract from 1, so clearly we have a greater chance of success, but it’s not as simple as the case with modifiers, where changing the modifier by 1 changes the chance of success by 5%. There’s no fixed change with advantage; it depends on your original chance of success.

(Note: you can do a similar calculation for disadvantage, but in that case the chance of success w/ disadvantage is chance of success squared and chance of failure with disadvantage is 1 – chance of success squared. For the rest of this post I’ll only work through examples with advantage, but the same principles apply to disadvantage.)
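If you’d rather check these formulas numerically than algebraically, here is a minimal Python sketch of all three cases (plain roll, advantage, disadvantage), using the fair-d20 assumption and the EDC convention defined above; the function names are my own, not anything standard:

```python
# Success probabilities for a d20 roll against an "effective DC"
# (EDC = DC - 1 - modifier), as derived in the text above.

def prob_success(edc):
    """Plain roll: fail on any of the EDC losing faces."""
    return 1 - edc / 20

def prob_success_advantage(edc):
    """Advantage: fail only if both dice fail."""
    return 1 - (edc / 20) ** 2

def prob_success_disadvantage(edc):
    """Disadvantage: succeed only if both dice succeed."""
    return (1 - edc / 20) ** 2

for edc in (18, 8):
    print(f"EDC {edc}: plain {prob_success(edc):.0%}, "
          f"advantage {prob_success_advantage(edc):.0%}, "
          f"disadvantage {prob_success_disadvantage(edc):.0%}")
```

Running it for the two EDCs used in the examples below reproduces the same 10%/19% and 60%/84% figures.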

We can see the varying benefit of advantage in practice by looking at some sample EDCs. Let’s first consider the very difficult EDC of 18 (meaning you’d need to roll a 19 or 20 to succeed). Without advantage, the probability of success is 10%:

\textrm{prob. success}=1 - \frac{EDC}{20} = 1-18/20 = 0.1

With advantage, the probability of success is 19%:

\textrm{prob. success w/ adv.}=1-\left(\frac{EDC}{20}\right)^2 = 1 - 0.9^2 = 1 - 0.81 = 0.19

Thus, advantage improved our chance of success by 9%, which corresponds roughly to a modifier of +2 (which would give us a bonus of 10%).

Next, let’s look at the case of an easier EDC of 8 (meaning you’d need to roll 9 or higher to succeed). Without advantage, the probability of success is 60%. With advantage, the probability of success is 84%. Thus, advantage increased our odds of success by 24%, corresponding roughly to a modifier of +5.

As it turns out, the increase in probability of success is greatest for moderate EDC values. With a very high DC, you are still likely to fail even with advantage, and with a low DC you are likely to succeed even without advantage, so the addition of advantage doesn’t change the probabilities much. But if you have a roughly 50% chance of success, adding an extra attempt is the most valuable. You can see the equivalent modifier when you have advantage based on EDC in the plot below:
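A short sketch can generate the numbers behind that plot: the “equivalent modifier” of advantage at each EDC is just the gain in success probability divided by the 5% that each +1 modifier is worth.

```python
# Equivalent modifier of advantage at each effective DC.

def prob_success(edc):
    return 1 - edc / 20

def prob_success_advantage(edc):
    return 1 - (edc / 20) ** 2

for edc in range(1, 20):
    gain = prob_success_advantage(edc) - prob_success(edc)
    print(f"EDC {edc:2d}: +{gain:.1%} success chance, ~+{gain / 0.05:.1f} modifier")
```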

From this we can see that advantage can be equivalent to a +5 modifier, which is quite strong, but that advantage is capped, and it’s less powerful than my intuition originally suggested. So while I definitely could have had plenty of fun playing D&D without having thought through this math, it has let me grant advantage (or impose disadvantage) on rolls without being worried that it’s “overpowered”.

The Marginal Cost of Driving a Mile

For car-owning Americans, vehicle expenses will almost always take up a significant portion of the monthly budget. Cars cost less than housing, and might cost less than food, healthcare, or childcare depending on specific circumstances, but they cost more than pretty much everything else. Since the great majority of American households (~90%) own cars, it is natural that cars are a big topic when it comes to personal finance advice, and there is plenty of great car-related advice out there.

But most of the advice I’ve found deals with either which car you’re buying (e.g., luxury vs. standard, new vs. used) or whether you should own a car at all. Often the discussion centers on the per-mile cost of driving and uses that to compare cars to other modes of transport. The IRS even provides an official value for how much you can deduct: $0.58 per mile driven. While this might be a decent estimate of the overall cost of car ownership, it’s not as useful for the day-to-day decision of whether to drive someplace or bike/take public transportation.

For those day-to-day decisions, the marginal cost of driving a mile is a much more useful metric: how much more it costs to own and operate the car for each additional mile driven. The idea here is that the costs of a car can be separated into a fixed portion (that you pay no matter how much you drive) and a variable portion (that increases proportionally with how much you drive the car). I’m in a one-car household: I get by day to day without a car, but my wife has a car that she needs in order to commute. When we go into the city on the weekend, we have to decide whether to drive or take public transportation. For this decision, it’s not really fair to include the fixed portion of the car cost, because we’re going to have to pay that whether we drive on the weekends or not.

I think that a simple (but reasonable) treatment would be to say that the car’s principal cost (or depreciation, if you prefer to think about it that way) and insurance costs are fixed, and that the gas and maintenance costs are proportional to distance driven. Here are my justifications:

  • The year the car was made is more important to resale value than the number of miles on the car
  • Unless you use per-mile insurance (like MetroMile), how much you drive has relatively little influence on your bill
  • I don’t think anyone will argue with a model saying that gas burned is proportional to distance driven
  • While you are supposed to have maintenance check ups at regular time intervals even if you haven’t hit the next miles driven checkpoint, the costs of the actual repairs should scale more with distance driven

So regarding the costs that constitute the marginal cost of driving: gas expenses are simple enough to calculate. Your per mile cost is simply the cost of gas divided by the fuel efficiency of your car. E.g., if you drove a car that got 28 mpg, and gas costs $3.19/gallon where you live, then the gas cost per mile would be:

\frac{\$3.19\textrm{/gallon}}{28\textrm{ miles/gallon}} \approx \$0.11\textrm{/mile}

The exact gas cost to drive a mile will vary depending on how efficient your car is and how expensive gas is in your region, but for most places in the US, it should be in the ballpark of $0.10/mile. Maintenance isn’t as simple to calculate, and will probably vary over the life of your car, but most estimates put it around $0.10/mile for a standard commuter car.

Using both these numbers, the marginal cost of driving should be around $0.20/mile, rather than the $0.58 figure from the IRS or the other numbers above $0.30/mile that are typically cited for the overall cost of driving. Using this $0.20/mile figure, it’s definitely cheaper for us to drive into Boston for a weekend trip – it’s about 20 miles round trip, or $4, which is less than a round-trip fare for one person on the subway.
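Here is the same arithmetic as a quick sketch; the gas price, mpg, and maintenance figures are the rough estimates from this post, while the subway fare is a placeholder I’m assuming rather than a quoted price.

```python
# Back-of-the-envelope marginal cost of driving vs. a subway round trip.

GAS_PRICE = 3.19              # $/gallon
MPG = 28                      # miles/gallon
MAINTENANCE_PER_MILE = 0.10   # $/mile, rough estimate for a commuter car

gas_per_mile = GAS_PRICE / MPG                           # ~$0.11/mile
marginal_per_mile = gas_per_mile + MAINTENANCE_PER_MILE  # ~$0.21/mile

ROUND_TRIP_MILES = 20
SUBWAY_FARE_ROUND_TRIP = 4.80  # hypothetical per-person round-trip fare

print(f"Marginal cost of driving: ${marginal_per_mile:.2f}/mile")
print(f"Weekend round trip by car: ${marginal_per_mile * ROUND_TRIP_MILES:.2f}")
print(f"Weekend round trip by subway, two riders: ${2 * SUBWAY_FARE_ROUND_TRIP:.2f}")
```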

For our use case, if we’re only focusing on the cost of subway fare vs. the marginal cost of driving, we should almost always drive. But that’s not necessarily the only thing to consider. Other factors that could change the math include:

  • The effect on the environment
  • The parking situation
  • The difference in time
  • The difference in comfort
  • Getting some exercise

Going through each of these bullets, in order: one obvious difference between driving and public transportation is that the latter is much better for the environment. If that’s something you care about (and it is for me), it’s probably worth factoring it into the math as well. There are different ways you could calculate the value, but I think adding 5-10 cents/mile to the cost of driving is a reasonable way to quantify this factor. It doesn’t drastically change the math, but it does push us closer to favoring the subway (especially if you were driving one person and not two).

Depending on where in the city we’re going, a huge factor is parking. There are some trips we make where there is ample, free parking, and that doesn’t add any cost/inconvenience, but there are some places in the city where free parking is almost impossible to find, and lots might cost around $25. This is obviously a huge factor, and if we’re going somewhere with expensive lots, we’re much more likely to take the subway. If you’re paying for parking, that cost can be factored in directly. If you’re spending a significant amount of extra time looking for parking, that brings us to the next point.

In practice, the main reason people drive instead of taking public transportation is probably that it’s faster, and (when it’s true) that’s a good reason. The less time we spend traveling, the more time we can spend doing things that actually matter. A decent starting point is to value your time at your hourly wage, though there are fair arguments for pushing that up or down. In any case, driving to Boston on the weekends usually saves us about an hour round trip compared to taking the subway, so that’s a huge factor in favor of driving. During normal commuting hours, driving to Boston might actually take longer, giving the edge to the subway, so it definitely depends on the specific situation.

It’s also worth factoring in the difference in comfort between the options. If I’m traveling with my wife, the car is slightly nicer: we’ll probably chat and listen to music, whereas when we take the subway together we’re a bit more likely to sit in silence. If I’m traveling alone, the subway is much nicer: I can relax and listen to a podcast, compared to focusing on driving (and probably getting annoyed by other Boston drivers) when I’m in the car. On this point I’d probably just add a mostly arbitrary bonus to the one I prefer: maybe adding a $5/hour discount to driving when I’m with my wife, and adding a $20/hour discount to taking the subway when I’m by myself.

It could also be worthwhile to consider the difference in exercise between options. In the example I’ve been using (driving vs. taking public transportation) it doesn’t matter much, since the only potential exercise would be a short walk to and from the subway stations, but if you’re deciding between driving and biking this will matter more. If you’re someone who would benefit from more exercise (which basically all of us are), then time spent exercising should be valuable to you, and time spent driving should not. A decent starting point would probably be to value time spent exercising at your hourly wage, but again, there are reasonable arguments to be made to shift this up or down.
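To make this concrete, here is one way to fold all of these adjustments into the same dollar comparison. Every number besides the $0.20/mile marginal cost is an assumption I’m making up for illustration (fare, wage, time saved, comfort), not a measured value.

```python
# A sketch of the weekend-trip decision with the "soft" factors priced in.

MARGINAL_COST_PER_MILE = 0.20  # gas + maintenance estimate from above
ENVIRONMENT_PER_MILE = 0.07    # the 5-10 cent/mile penalty discussed above
ROUND_TRIP_MILES = 20
PARKING = 0.0                  # try 25.0 for the expensive-lot case

SUBWAY_FARE_ROUND_TRIP = 4.80  # hypothetical per-person fare
RIDERS = 2
EXTRA_SUBWAY_HOURS = 1.0       # assumed extra travel time on the subway
HOURLY_VALUE_OF_TIME = 25.0    # hypothetical wage
COMFORT_CREDIT = 5.0           # per-trip credit toward the preferred option
                               # (here: driving, since both of us are going)

drive_cost = ((MARGINAL_COST_PER_MILE + ENVIRONMENT_PER_MILE) * ROUND_TRIP_MILES
              + PARKING - COMFORT_CREDIT)
subway_cost = (SUBWAY_FARE_ROUND_TRIP * RIDERS
               + EXTRA_SUBWAY_HOURS * HOURLY_VALUE_OF_TIME)

print(f"Effective cost of driving: ${drive_cost:.2f}")
print(f"Effective cost of subway:  ${subway_cost:.2f}")
```

The point isn’t the specific output – with this many made-up inputs it couldn’t be – but that each factor gets its own line that can be argued about and adjusted separately.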

In general, I try to avoid driving, and I default to thinking it’s the wrong option, but going through this exercise and looking over this list of points does give me some confidence that it’s not the worst decision in the world when we drive into Boston on the weekends. As long as we aren’t paying $25+ for parking, it seems like paying for the marginal cost of driving those extra miles is worth it.

Variations on Capture The Flag

Even though the arrival of summer no longer corresponds to a long break from school/work for me, it still reminds me of the weeks spent at summer camp when I was growing up. In my elementary school years, one of my favorite games to play at camp was capture the flag (CTF). There’s something deeply compelling about the large scale of the game, and the teamwork and coordination required to win. After playing for many summers, however, I started to realize that there are some big problems with the mechanics of the “classic” version of CTF. Perhaps it was from playing more video/board games and looking at the summer camp staple from a game design perspective, but at some point I became convinced that there should be many ways to improve on the classic version of capture the flag.

“Classic” Capture the Flag

I imagine most people reading this are familiar with capture the flag, but there are enough variations that it is still worth defining what I consider the classic version and the rules for the majority of the games I played in. The game is played on a large open field divided in half, with each of the two teams taking one half of the field as their “home” side. Each team has a flag on their side that the other team is trying to retrieve and bring back to their own side. Successfully retrieving the flag earns your team a point (or wins the game outright, if you’re not playing for a fixed time). If you are tagged by an opposing player while on their side of the field, you go to a “jail” on the opponents’ side of the field. Players in jail are freed if a non-jailed player from their team tags the jailed players. The players in jail can form a chain by holding hands in order to stretch further from the jail spot to make it easier for teammates to free them. There is a “safe zone” around the flag spot, so that if you reach the flag, you can take a breather without being tagged before trying to run the flag back to your own side.

The fundamental issue I have with this standard version of capture the flag is that the optimal strategy is very defensive, resulting in slow, war-of-attrition-style gameplay. In any game of CTF, you need to distribute your resources (players) between offense (trying to capture the opposing flag or free your jailed players) and defense (protecting your flag and keeping opposing jailed players from being freed). Sending players to try to capture the flag is risky: either you succeed and win, or you fail and some of your players are jailed. If you have a lot of fast players, then you’re likely to succeed, but for balanced teams, the chance of success on any given attempt is pretty low. Thus, sending players on offense at the beginning of a round is generally a bad strategy. It is better to play defensively until you’ve jailed enough of the opposing players that their defenses are stretched thin and you have a higher chance of capturing their flag. Unfortunately, if both teams adopt this strategy, then CTF becomes a game of sitting around and waiting more than anything else, which is no fun for either team.

I haven’t played capture the flag since undergrad, so maybe my theory-crafting about it so many years later misses the mark of what it’s actually like to play, but in any case, all these years have given me the opportunity to come up with many variations on the classic game that address what I view as the fundamental flaw of the game.

Variations on Jail

Jail is probably the most problematic aspect of CTF, as it’s basically player elimination, one of the most infamous game design mechanics out there. Perhaps as a consequence of this, there are plenty of variations on the typical jail rules, and jail is the main aspect of the game where I’ve seen different rules actually implemented at summer camp. All the variations below are ways of making it easier to get out of jail, which discourages strategies that depend on keeping lots of opposing players in jail.

A simple variation, and one I considered adding as part of the classic rules since I think it is fairly common, is that jailed players who manage to tag an opposing player free their team. This discourages “jail guards” from staying too close to the jail, and gives jailed players something more to do, as when they are linked they can coil up and stretch out to try to catch opposing players off guard and tag them.

Another variation is to give jailed players an alternate task they can complete to release themselves. It could be something physical (do X pushups, do Y jumping jacks) or something mental (solve a Rubik’s cube, solve a Sudoku puzzle). Assuming the players are capable of completing the assigned task, this puts a time limit on how long players will stay jailed, and gives them something to do in the meantime.

An even simpler way of ensuring that players don’t stay in jail for too long is to have regular “jailbreaks”, when all players are released from both jails. Short intervals (releasing players every 2 minutes) keep the game fast paced, since players never spend too long in jail, while longer intervals (releasing players every 10 minutes) don’t change the game drastically but do guarantee that players won’t be in jail all afternoon.

The most dramatic change to the rules would be to get rid of jail altogether. The point of jail is to be a negative consequence for being tagged, but you don’t necessarily need a jail to achieve this. An example of an alternative is that rather than tagged players being jailed, tagged players must walk back to their own side (or maybe their own flag safe zone) with their hands on their head until they’re allowed to resume play. Getting rid of jail essentially eliminates downtime, so everyone gets to play for the entirety of the game, and there is little disincentive from sending players to try to capture the opposing flag.

Variations on Field of Play

While it’s not always an easy change to implement, one option that can help push the balance of play towards offense rather than defense is changing where the game is actually played. Rather than an open field, where everyone can see what’s going on and quickly respond to defend when their flag is being threatened, CTF can also be played in a forest or on a campus with buildings between the flags. Obstacles like trees and buildings give players something to hide behind, so there are opportunities to steal a flag through distraction and stealth, rather than just by running faster than the other players.

Another option, if you’re limited to playing on an open field, is to make the sides more complex than just a field cut in half (although you’d probably need a lot of cones/rope to mark the sides in this case). On the classic field, teams only need to worry about opponents coming from a single direction. But if the shape of the two home sides were interdigitated “L”s or “U”s, for example, then flag guards would need to worry about players coming from two or three directions, making the flag harder to guard and making capture attempts more likely to succeed.

Variations on the Flag

Another option for variation is to change the rules around the flag itself. In the classic version of the game, the flag can be handed off, but it can’t be thrown – if not due to the rules, then simply because flags are generally difficult to throw. If the flag is replaced by a ball or a frisbee, then allowing the flag to be thrown between teammates (as long as it doesn’t touch the ground) opens up new offensive strategies. In order to keep it possible to defend against the thrown flag, it’s probably prudent to disallow throws to or from safe zones. For example, you would need to step outside the safe zone around the flag to throw, and you couldn’t throw it to a teammate on your side of the field’s dividing line.

Another change that would open up the field of play would be to have multiple flags on each side. This would naturally make it harder for a team to defend all of its flags effectively, and you could additionally assign different point values to the different flags, which would add a layer of strategy to the game. For example, a flag that is close to the dividing line might be worth just 1 point, while a flag that is deep on the opponent’s side and doesn’t include a safe zone around it might be worth 5 points.

Variations on the Tag Method

Classic capture the flag is played with one-hand touch, which means you just need to touch an opposing player with one hand (or really even just one finger) in order to get them out. A simple change to make it harder to defend and easier to capture the flag would be to use two-hand touch, which requires you to tag an opposing player with both hands simultaneously in order to get them out. In practice, however, this variation might not work well, as I suspect it would lead to more arguments about whether or not someone was really tagged out (and classic CTF already has enough of those arguments).

Another variant is to use waist flags (like those used for flag football) that must be pulled in order to get a player out. In theory, this should make it more clear-cut whether a player was tagged out or not, but with the added possibilities of a player blocking their own flags with their hands or a flag falling off on its own, it’s unlikely that this method would eliminate accusations of cheating.

Perception vs. Statistics on the Hearthstone Ranked Ladder

When Hearthstone, Blizzard’s take on a collectible card game, first came out, I was hooked. It tapped into my nostalgia for all the Magic: The Gathering I played when I was younger, but offered a much smoother playing experience than what was available for other online “card” games at the time. I played for a few years, but eventually grew bored of Hearthstone. Now I’m back on the online CCG bandwagon with the release of Magic: The Gathering Arena, the new Hearthstone-ified version of the original TCG/CCG.

Playing an online CCG again reminded me of one thing that never really made sense to me about Hearthstone: the way its ranked ladder worked. Most competitive online games use some variant of the Elo rating system. In such a system, each player has a number corresponding to their skill level – whenever they win a game, that number goes up, and whenever they lose a game, it goes down. The magnitude of the change in Elo rating is equal for the two players involved in the game (e.g., if I lose to you and consequently lose 5 points, you gain 5 points) and depends on the difference in rating between the players. For example, if a player beats a much lower rated player, they will only get a small increase in rating, but if they lose to that much lower rated player, they will experience a large decrease in rating. Typically this Elo rating is then translated into a rank/tier/division, which broadly groups players by skill level. For example, in Rocket League, the lowest ranked players are Bronze, followed by (in ascending order): Silver, Gold, Platinum, Diamond, Champion, and Grand Champion.

Hearthstone, Magic Arena, and most online “card” based games don’t seem to follow this method, at least not until you get to the very top tier of the ladder (e.g., “Legendary” for Hearthstone, or “Mythic” for Magic). For the portion of the ladder relevant to most players, you start at (or near) the lowest rank at the beginning of every month, a win gets you one “point” (the exact terminology varies game to game), a loss loses you one point, and there are regular point cutoffs associated with each rank (e.g., each subsequent rank requires 5 more points than the last one). There are some specific caveats to each game, but that general system forms the foundation of the ranking system.

On the surface, it might seem like there’s not much difference between these two systems. Both are zero-sum: any points lost by one player are gained by another. And assuming there are enough players that you’re always matched against someone of roughly your own skill level, the importance of each game is about the same. What is different is the shape of the distribution of player skill ratings, which matters because in the card game system, this distribution is directly tied to player ranks.

In the Elo system, the distribution tends to resemble a bell curve, since each game should have a roughly 50% chance of going either way, like a Galton board. However, this distribution isn’t particularly important, since the developers can set arbitrary cutoff points for each rank. Given that the Gold and Platinum ranks are in the middle of the possible Rocket League ranks, players expect that an average player would be Gold or Platinum, and the Rocket League developers can set the ratings corresponding to Gold and Platinum to be in the middle of the distribution. If they found that players were happier when classified at a higher rank, they could arbitrarily shift the ratings so that Diamond corresponded to the middle of the distribution, without changing how the Elo rating system works overall.

The card game system would also produce a bell curve if it were truly zero-sum, but at the bottom of the ladder it is not. Someone with zero points can’t lose points, so games at the bottom of the ladder result in a net positive production of points. Because players can’t fall below zero points, the distribution of points ends up asymmetric and long-tailed, like a Pareto distribution, or a chi-squared distribution with only one or two degrees of freedom. (Since this can be modeled as a diffusion process, I think of it as the temperature distribution in the heating of a semi-infinite solid.) The shape of this distribution wouldn’t matter if the developers set ranking cutoffs arbitrarily, but by keeping the point cutoffs regular or mostly regular, this long-tailed distribution in points leads to a long-tailed distribution in player ranks as well.
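If you’d rather see the skew than derive it, a quick Monte Carlo sketch of the mechanic as described does the job: each player starts the month at zero, plays a fixed number of coin-flip games at +1/−1, and can never drop below zero. The games-per-month count and the flat 50% win rate are simplifying assumptions.

```python
# Simulate one month on a "card game" ladder with a floor at zero points.

import random
import statistics

def month_on_ladder(games=100):
    points = 0
    for _ in range(games):
        points += 1 if random.random() < 0.5 else -1
        points = max(points, 0)  # can't fall below the bottom of the ladder
    return points

players = sorted(month_on_ladder() for _ in range(100_000))
print("mean points:    ", round(statistics.mean(players), 1))
print("median points:  ", statistics.median(players))
print("90th percentile:", players[int(0.9 * len(players))])
```

The mean lands noticeably above the median, with a long tail of players well above both – the asymmetric shape described above.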

Since I’m sure most players don’t spend as much time thinking about distributions as I do, the card game ranking system becomes problematic because the perception of players’ rankings greatly differs from the reality of the ranking distribution. Back when I played Hearthstone, the effective lowest rank (corresponding to zero points in my above description) was 20, and the highest rank was 1. Consequently, a player might reasonably assume that an average skill level would get them to rank 10 or 11, when in fact, due to the skewed distribution, the median player might be at only rank 17 or 18.

So if you’re committed to this “card” ranking system, with its skewed distribution, what can you do? There are some changes that were built into the original Hearthstone system, like not having completely regular point cutoffs. For Hearthstone, the lowest ranks require 3 points to advance, while the highest ranks require 5 points. This helps spread out the lower ranks a bit more, but is not aggressive enough to really make the ranks match player expectations, and doesn’t address the fundamental problem that points are only created at the bottom of the ladder, and must funnel all the way up the ladder for more players to reach the highest ranks.

The big change I’ve noticed in Magic Arena is that there are way more instances of net positive point distribution. Hearthstone originally only had two sources of net point increase: a player losing at the bottom of the ladder when they couldn’t actually lose a point (but their opponent still gained one) and “win streaks,” where 3+ wins in a row would lead to more than one point per win. (Technically a ladder player beating a Legendary player also resulted in a net gain in points, but this was probably cancelled out by ladder players losing to Legendary players.) In Magic Arena, they added multiple “checkpoints” throughout the ladder, so that once you achieve a certain rank you can’t fall below it. These artificial floors add more net points to the system, since losing at the bottom of the ladder OR at any checkpoint results in a net increase in points (note: I believe Hearthstone implemented a similar change after I stopped playing). The other change is that in the lower ranks, every single game is net positive: the winner gets two points and the loser only loses one. Between these changes, I suspect that the ranking distribution for Magic Arena players is much closer to player perception than it was for the original Hearthstone ladder.
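A rough extension of the earlier sketch with those two Arena-style changes – checkpoint floors you can’t fall below, and net-positive games in the lower ranks – shows the effect; the specific thresholds are invented for illustration.

```python
# Simulate one month on an Arena-style ladder: checkpoint floors plus
# +2/-1 scoring below a cutoff. Thresholds here are hypothetical.

import random
import statistics

CHECKPOINTS = (0, 15, 30, 45)  # hypothetical point floors
NET_POSITIVE_BELOW = 30        # hypothetical "lower ranks" cutoff

def month_on_arena_ladder(games=100):
    points, floor = 0, 0
    for _ in range(games):
        if random.random() < 0.5:
            points += 2 if points < NET_POSITIVE_BELOW else 1
        else:
            points -= 1
        # once a checkpoint is reached, the player can never fall below it
        floor = max([floor] + [c for c in CHECKPOINTS if points >= c])
        points = max(points, floor)
    return points

players = [month_on_arena_ladder() for _ in range(100_000)]
print("mean points:  ", round(statistics.mean(players), 1))
print("median points:", statistics.median(players))
```

In this toy version the median player ends the month far higher than on the plain floored ladder, which is the direction you’d want if the goal is bringing ranks closer to player perception.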

All this discussion of how to carefully craft the card game ranked ladder system so that it matches player perceptions/expectations raises the question: why bother? Why not just adopt the Elo rating system used in other games? If you asked the developers of these games, I imagine they would make an argument about transparency, since it’s much clearer what a player needs to do to advance: “win X more games than I lose”, compared to “gain X Elo rating points (which are often hidden from the player)” in other games. But I believe the real reason is to drive participation. By starting players at or near the bottom of the ladder every month, it forces them to play a decent number of games each month in order to achieve the rank they’re striving for. Free-to-play games (which basically all of these card games are) need high participation rates to survive, and these ladder systems allow playing to always be associated with advancement (if perhaps in a more Sisyphean way than the developers would admit).

Wealth is More Powerful than Income

In the last post, I discussed how wealth and income are mostly interchangeable, as debt allows one to convert income into wealth, and investing allows one to convert wealth into income. I noted one primary advantage of wealth (at least in the United States): income from invested wealth is taxed at about half the rate of income from labor. For answering hypotheticals like “would you rather have a guaranteed job that earns $250k/year or a lump sum of $3 million?” the difference in tax rates might be one of the most important factors to consider. But from a more pragmatic “how should I make financial decisions in my day to day life?” perspective, there is another, more important reason why wealth is more powerful than income.

The main reason that wealth is more powerful than income is that it is more robust. It’s relatively easy to lose your source of income: for many people the majority of their income comes from one job, and losing that job means you’ve suddenly lost all your income. There are many reasons you might lose your job, and they’re often out of your control. You may be able to find another job quickly and recover your income, but it could also take months or years to find a new job, and/or you might need to take a pay cut. So income is far from guaranteed over the course of your working career.

Wealth, assuming it is invested in a well diversified portfolio, is much more stable since it doesn’t depend on a single industry or company like income does. There are times when your wealth will decrease, but the worst recessions in recent history have at most cut wealth (or more precisely: stock index values) in half, and in all of these cases the markets recovered within the next decade. This doesn’t mean there will never be a market destroying apocalyptic depression, but if that happened, it would wipe out all sources of income as well.

Wealth is also more robust than income because converting it to income through investing results in a positive feedback loop of building more wealth. If any of the income that you generate is disposable, you can invest that excess, and your wealth will grow through the power of compound interest. Converting income to temporary wealth through the use of debt doesn’t have a comparable positive feedback loop. Your debt being paid off has no influence on your underlying income, so after you’re able to pay off debt you’re typically back where you started. And in the unfortunate case where you lose your income before your debts have been paid, compound interest will be working against you. Your debts will keep growing while you can’t make payments on them, without providing you any additional value.

So, from a personal finance perspective, it is important to accumulate wealth. But you should do this by accumulating true wealth, slowly saving money from your disposable income over time. Leveraging income to borrow large sums of money won’t truly make you wealthy – it may feel like it while you get to spend money you don’t have, but if you ever want to retire you can’t depend on income alone.

Wealth and Income are (Almost) Interchangeable

With the Democratic presidential primary elections starting to take shape, financial inequality is one of the biggest issues, if not the biggest issue, being debated. I remember similar arguments during the run-up to the 2016 election, which were essentially the entire impetus for Bernie running for president. I also remember a specific conversation I had during that time, where an acquaintance of mine was making the case that wealth and the estate tax are the wrong things to target when addressing inequality – they argued that the source of wealth is income, so the best solution would be increasing income taxes.

This was long before I had done my dive into personal finance, so I hadn’t spent much time thinking about wealth vs. income, and didn’t argue against this view. If someone made the same argument to me today however, I would be more inclined to push back. With a very simple idea (which I was very familiar with at that point, but apparently hadn’t thought about in any substantial way), we can see that wealth and income are essentially interchangeable. That idea is interest.

We’re all familiar with interest. The most common scenarios where it comes up are receiving interest on a bank account or paying interest on a loan. But what you might not have realized, at least not explicitly, is that interest is a way of transforming wealth into income or vice versa. When you take out a loan, you are giving up some of your income for access to wealth. And when you invest or lend money, you are receiving income in exchange for giving someone else access to your wealth.

Of course, while interest provides some mechanisms to convert between income and wealth, it does not make them exactly equivalent. There are a few important differences from a more explicitly personal finance perspective, but I’ll save those issues for another post.

The important difference I’ll cover today comes from how the government (at least in the USA) sees and taxes income vs. wealth. For the great majority of people in the US, the main way they are taxed is via their earned income. When they perform labor that earns them money, a percentage of it goes to the government. The percentage depends on different factors, but the most important one is how much you earn. Someone earning a modest salary might be taxed at 10-20%, while someone earning millions of dollars annually would be taxed at almost 40% (at least in theory – in practice there are tricks for them to pay less).

However, if you are very rich or part of a very rich family, there are two other taxes that will apply to you: capital gains tax and the estate tax. The capital gains tax applies to income that you make from your money (i.e., investments) rather than your labor, and the estate tax applies to wealth transferred to heirs after a rich person dies. These taxes work in a similar way to income tax, where higher “earners” pay higher rates.

I won’t go into detail about the estate tax here, since it is harder to compare to the income tax, but for capital gains it is fairly apples to apples. In both cases, you are taxed on money received, it’s just the source of the money that differs: the income tax affects money you received for performing work, while the capital gains tax affects money you received for having money and investing it.

So how does the government see wealth and income differently? The income tax rates are roughly double the capital gains tax rates. In terms of how much you owe Uncle Sam, it’s much better to “earn” your money through having wealth than to earn it by working. This is one piece of the puzzle of why wealth accumulates and inequality grows, and why inequality is now such a central issue in American politics.

There are a few high profile tax proposals on the Democratic side at present aimed to address inequality:

  • Raising the top marginal income tax rate (publicly championed by Alexandria Ocasio-Cortez)
  • Raising the estate tax (publicly championed by Bernie Sanders)
  • Introducing a wealth tax (that is, a tax on net worth above a certain threshold – publicly championed by Elizabeth Warren)

I think all of these would be beneficial for the country, but of the three I’m most enthusiastic about the wealth tax. It addresses the reality that wealth is more powerful than income, and cuts most directly to the problem of wealth inequality.

I’m a bit surprised no one is championing an increase in the capital gains tax (at least, no one that I’m aware of). That would be another more direct route to addressing wealth inequality, and wouldn’t add any tax burden to the majority of Americans.

Internet Fandom and (Self) Gatekeeping

There are very few media franchises that I identify as a fan of. This isn’t because there aren’t plenty of things I enjoy, but because in the age of the internet, my perceived barrier to fandom has become incredibly high.

I was reminded of this recently while watching a recent episode of Um, Actually, a YouTube game show (I guess technically it’s a dropout.tv game show) that tests participants’ knowledge of minutiae from various nerdy lore. I was surprised that none of the contestants were able to answer a question about Game of Thrones correctly, when I knew the answer before the host had even finished reading it. The subject of the question is directly referenced in the TV show, and possibly in the books as well (but I don’t remember for sure), so the question was not about some tiny detail hidden in the depths of Game of Thrones mythopoeia; it was about a prominent part of the series’ world building.

I’ve read the books, and I watched the first few seasons of the Game of Thrones TV series, but I lost interest and stopped following it somewhere in the 3rd or 4th season. Hence, I would not consider myself a Game of Thrones fan. It’s possible I’d maintain this distinction even without the internet, since I know plenty of people who participate in watch parties every Sunday night when new episodes are airing, but with the internet, I definitely couldn’t consider myself a “real fan”. Being aware of communities full of people devoting seemingly endless hours to dissecting the series’ content and creating content of their own makes my having read the books and knowing the answer to a trivia question seem inconsequential in comparison.

The internet also reminds me of the knowledge (/obsession) that I lack about things I do consider myself a fan of. For example, it would be hard to deny that I’m a fan of Rocket League, but I don’t think I’d be able to make it through the end of this Rocket League specific game show, and I know very little about the eSports (that is, professional competitive play) side of Rocket League.

Maybe others would gatekeep me from Rocket League fandom, but there’s no doubt in my mind that I’m a real fan.

I don’t really have a point to make about this phenomenon or a statement to make about whether I think it’s good or bad. It’s just something I’ve observed about myself and how I relate to media and the communities that are out there on the internet. All the more power to those who consider themselves fans of all sorts of games, movies, books, and shows, but the depths of all the fandoms out there make it hard for me to count myself as part of more than just a few.

My Resolutions for 2019

In my last post I referred to my current approach to many of my endeavors as “vague discipline” (perhaps as opposed to “focused discipline”). I’m not sure this is a real phrase, so I figured I would start this post by elaborating on what exactly I mean. If I employ vague discipline in an area of my life, it means that I consistently carve out time/effort/energy for that part of my life and/or stick to a loose set of rules, but I’m not making a focused effort to always improve, and I don’t self-reflect and change my rules and approach if something isn’t going quite the way I’d like. Some examples where I have applied vague discipline and it has been successful are personal finance and my PhD. The extent of my personal finance strategy pretty much boils down to two guidelines: don’t spend money on things that aren’t worth the cost and use savings for diverse and productive investments. Following those guidelines, I’ve been able to maintain a 20-30% savings rate since I started my PhD, which isn’t as productive as the focused discipline of people who follow the FIRE movement, but puts me in much better shape than most Americans. While I worked on my PhD, my whole approach was pretty much just to show up every day and try to learn something. I didn’t finish as quickly or with as many publications as people I know who took a more focused approach, but in the end I still successfully completed my PhD.

The best example of an area I have been applying vague discipline with results I am not happy with is rock climbing. Rock climbing has been my main sport for over five years now, and for most of that time I didn’t strive for anything specific beyond going to the gym 2-4 times per week. In my first 6 months climbing I saw clear improvements, but after that any progress became much slower and there have been periods of regression as well. In the past ~6 months in particular, I haven’t felt that strong on the wall. There are many potential reasons, but after some reflection I think that it’s because my changing work schedule has led to me cutting climbing sessions shorter, oftentimes before I’ve really even started to push myself.

This leads into my first resolution for 2019. I’ve always known that I could climb stronger if I started climbing-specific training or actually got organized about hangboarding, but I’ve never pursued either. I think this is because climbing is one of my sources of “play”, so I’m hesitant to apply any strict structure to it. So for 2019, my first resolution will simply be to have at least two climbing sessions per week that are at least two hours long. With work and commuting, this will require some better planning on my part, but it won’t require busting out spreadsheets or anything like that. If, by the end of March, I’m still not happy with how strong I’m climbing, I will re-assess and potentially revise the resolution. (To make this resolution more specifically measurable, I’ll say it warrants revision if I’m not able to climb half the V6s in my gym at the end of March.)

My other resolution for 2019 has to do with an area where I haven’t really been applying discipline, vague or otherwise. Since I started my third part time job in August, I’ve returned back to basically a full time work schedule, which means that all my personal projects have fallen by the wayside. It may have actually started before the job, judging by my post history on the blog. While the blog is not equivalent to my personal projects, they are definitely connected, and so my second resolution for 2019 is to publish at least one blog post per month. My hope is that taking time to write (and think about whatever I’m writing about) will reignite progress on a project or two. But even if that doesn’t happen, the reflection involved in writing should help me be more intentional about other areas of my life – namely my work and career. So here’s to 2019 – I doubt it will have as much self-reflection as the middle years of my PhD, but it should at least have more than 2018.

New Year’s Resolutions

I’ve never really made New Year’s resolutions in the past. I’m all for self improvement, and while I’ve made many resolutions over the years related to diet, exercise, personal finance, dental hygiene, etc. that match typical New Year’s resolutions, I never waited until the start of the new year to make them. My attitude has always been that if a change is important enough for me to make, I’ll make it as soon as I realize the change is necessary/beneficial. While the start of the year can be a good time to reflect on potential changes to make in your life, it’s not inherently better than any other time. For this reason, I’ve often bought into the stereotype that most New Year’s resolutions only last through January – if you need an arbitrary date to make a change, is the change really important enough for you to stick with it?

However, this year it looks likely that I’ll be using the arbitrary date of January 1st (or maybe just the month of January – I’m not sure I’ll have my resolutions sorted out in SMART fashion in the next two days) to launch some resolutions. Luckily for me, it seems the stereotype is wrong and most resolutioners are at least partially successful.

There are two main factors motivating this for me. The first is that while in some ways 2018 was a big year for me (I got married!), in other ways it felt stagnant. I think this is mostly due to the fact that 2018 was the first year I wasn’t a student. And while I’ve picked jobs so far that lead to my day to day life being similar to when I was a student, I don’t have the overarching structure where my small, everyday progress is leading to a graduation/degree. Even though I have chances to learn every day, I don’t think I’ve done enough thinking about where I want that learning to take me – and if I don’t particularly care, I haven’t justified that to myself. The second factor is that my wife is in the middle of some fitness goals where she’s made inspiring progress. Seeing her progress is a good reminder that taking a structured approach to tackling goals is way more productive than the vague discipline I’ve been employing recently.

So while I am perhaps resolved to have a resolution or two for 2019, I have yet to figure out the specifics. The short term resolution I will make for now is that within the next two weeks I’ll determine what resolution(s) I want to make for 2019 and write about it in a follow up blog post.

Edit: I forgot a big part of why I don’t like new year’s resolutions, but now that we’re a week into January I’ve been reminded. Around the new year, resolutioners make the gym and produce section of the grocery store crowded and unpleasant. If they spread out the times that they decided to start their resolutions, this wouldn’t happen. Some anecdotal evidence against the article I linked above is that by the end of January, everything is always back to normal.

Effective ROI for investments with recurring costs/earnings

Edit: I finally found the existing term for the idea I was exploring in this post. Basically, this post was an exercise in re-deriving the internal rate of return (IRR) and giving it a worse name and acronym.

In my last post, I discussed how to analyze the “investment performance” of buying goods in bulk or on sale by applying the idea of ROI (or more accurately, CAGR) to those purchases. As a refresher, ROI can be calculated from the following formula, and traditional investments are typically expected to have an ROI of ~5-15% over the long term.

ROI = (\frac{\textrm{final value}}{\textrm{amount paid}})^{(\frac{1}{\textrm{years invested}})} - 1

One point I briefly mentioned in that post was that it’s more valuable to receive payouts from an investment earlier, but for simplicity we treated the payouts from buying in bulk as being received in a lump sum once everything had been used. This wasn’t just for the simplicity of that post; it was also because (as far as I know) there is no version of ROI that accounts for an investment with recurring costs or recurring earnings. If you look for how to calculate ROI/CAGR, the equations all assume a single starting value and a single final value – there’s no room to account for extending or unwinding a position over the course of holding an investment. If you continuously buy and sell shares of a stock, for instance, you can track the ROI of a particular share by pinpointing the dates you bought and sold it, but there’s no equation to figure out the overall ROI that all the shares of that stock have provided you.

Introducing ROIRCE

In this post I’d like to propose a method for characterizing the performance of an investment that has recurring costs/earnings (or more broadly speaking, doesn’t have a single fixed purchase date and a single fixed sell date). I’m sure someone else could come up with a much better name and acronym, but here I’ll call this metric Return On Investment for Recurring Costs/Earnings, or ROIRCE. ROIRCE borrows from how Net Present Value is calculated, in terms of how it discounts costs and earnings in the future.

The ROIRCE value is chosen such that the following equation holds:

 \sum \frac{\textrm{cost}_i}{(1+ROIRCE)^{t_i}} = \sum \frac{\textrm{earning}_i}{(1+ROIRCE)^{t_i}}

where t_i is the time, in years since the beginning of the first investment, at which the associated cost is incurred or the associated earning is collected. The left side of the equation is the effective total cost of the investment, while the right side is the effective total earnings from the investment, with each term discounted based on how long it took to realize.

Unfortunately, this is an implicit equation, which makes it inconvenient to solve, and probably precludes ROIRCE from popular use. Here is one algorithm we could use to solve for ROIRCE:

  1. Guess a value, and plug it into the ROIRCE equation
  2. If the left side of the equation is greater, you need to reduce the ROIRCE value, while if the right side is greater, you need to increase the ROIRCE value
  3. Repeat steps 1 and 2 until the two sides of the equation are effectively equal

(For a more specific approach for raising/lowering the ROIRCE value, you could try a bisection method.)
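Here is a sketch of that bisection approach in Python. Representing the cash flows as (time, amount) pairs and the particular bounds and tolerance are my own choices, not part of any standard:

```python
# Solve for ROIRCE by bisection: find the rate at which the discounted
# costs equal the discounted earnings.

def discounted_total(flows, rate):
    """Sum of amount / (1 + rate)^t over a list of (t_years, amount) pairs."""
    return sum(amount / (1 + rate) ** t for t, amount in flows)

def solve_roirce(costs, earnings, lo=-0.99, hi=10.0, tol=1e-8):
    """Bisect between lo and hi until the two sides of the equation match."""
    def gap(rate):
        return discounted_total(costs, rate) - discounted_total(earnings, rate)
    mid = (lo + hi) / 2
    for _ in range(200):
        mid = (lo + hi) / 2
        if abs(gap(mid)) < tol:
            break
        if gap(mid) > 0:
            hi = mid  # discounted costs exceed earnings: the guess is too high
        else:
            lo = mid  # discounted earnings exceed costs: the guess is too low
    return mid
```

As long as the costs come before the earnings (which is the case in the examples below), the gap between the two sides changes monotonically with the rate, so this simple bisection converges.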

An application of ROIRCE

The reason I was reminded of this idea (which I had intended to write about since the last post, but never got around to) was a post on the personal finance subreddit about whether, after winning a lotto jackpot, one should take the lump sum or monthly payments. Unfortunately I can’t find the exact post anymore, but the options were either to take $60k as a lump sum, or $1k/month for 10 years (which totals $120k). The actual post was full of good advice about practical concerns, like which option is more advantageous from tax and psychological perspectives, but in this post we’ll consider the options in a simple, tax-free world and approach the choice purely mathematically.

One response argued that it’s better to take the lump sum of $60k, since if you invested it you could expect to have >$120k by the time you would’ve received the last monthly payment. Another pointed out the flaw in this reasoning: with the monthly payments you’re not sitting on them until the 10 years are up – you can invest that money and earn from it in the meantime. I agree with the second poster, since the first falls victim to the same oversimplification we made last post.

We can calculate the traditional ROI required for the lump sum to be better than receiving $120k ten years later through the following calculation:

ROI = (\frac{\$120k}{\$60k})^{(\frac{1}{10})} - 1 = 7.2\%

From a simple ROI perspective, if you can earn better than 7.2% returns annually on your investments (which is not a sure thing, but is definitely reasonable), then you should take the lump sum.

However, simple ROI doesn’t consider that we could invest each of our monthly $1000 payouts as soon as we receive them. If we invest each monthly payment over the course of the 10 years, we calculate a ROIRCE value of 17% – a very hard investment to beat! The only common situation I can think of where it would be beneficial to take the lump sum, when considering ROIRCE, is if you had a significant amount of credit card debt. This is because getting rid of debt is like investing at the debt’s interest rate, and credit card interest rates can exceed 20%.
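For reference, plugging the lottery numbers into the solve_roirce sketch from above (and assuming each payment arrives at the end of its month) reproduces that figure:

```python
# $60k up front vs. $1,000 at the end of each month for 10 years.
# Assumes the solve_roirce helper sketched earlier is in scope.

costs = [(0.0, 60_000)]
earnings = [(month / 12, 1_000) for month in range(1, 121)]
print(f"ROIRCE of the monthly payments: {solve_roirce(costs, earnings):.1%}")  # ~17%
```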

ROIRCE for buying in bulk

Using ROIRCE doesn’t add much to our analysis of buying goods on sale, since each incremental item will be held for a different amount of time, and that amount of time can be used in the traditional ROI calculation. We can calculate an overall ROIRCE for buying on sale, but we should still use the individual ROI of each increment to decide how much to buy.

Deciding if it’s worth buying in bulk, on the other hand, can benefit from ROIRCE. Take the example we used in the last post where I can buy a year’s worth of toilet paper for $100, which would cost $120 if I bought my toilet paper month by month. Buying toilet paper in bulk here is like buying an investment that pays me $10/month for a year. While the toilet paper itself isn’t money I can use, each month the $10 I would’ve budgeted towards toilet paper is freed up so I can spend it on whatever I want. Using traditional ROI, we found that in this case buying in bulk provided a 20% return. However, using ROIRCE and taking into account that I can do something productive with my freed up $10/month over the course of the year, the return on buying toilet paper in bulk is 41.3%! The return is almost twice as high when considering the effect of our recurring “earnings,” so for buying in bulk we can analyze our opportunities much more accurately with ROIRCE than with traditional ROI.
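The same check works for the bulk purchase, again reusing the solve_roirce sketch and treating each freed-up $10 as arriving at the end of its month:

```python
# $100 of toilet paper up front vs. $10/month of budget freed for a year.
# Assumes the solve_roirce helper sketched earlier is in scope.

costs = [(0.0, 100)]
earnings = [(month / 12, 10) for month in range(1, 13)]
print(f"ROIRCE of buying in bulk: {solve_roirce(costs, earnings):.1%}")  # ~41%
```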