Why Do People Borrow Using Constant P2P Lending?

This post is about Constant P2P Lending. If it piques your curiosity and you want to make an account, you can get $10 free by using this link or by using my referral code: leeweinstein

I like to dabble in different types of investments, and as a consequence of this I have small amounts of “play money” (that is, money I can afford to completely lose) in almost all the different categories of investment available to someone who isn’t an accredited investor. One of the first investments I tried (and still use, through LendingClub) was peer-to-peer (P2P) lending, largely because it’s so simple to understand. The idea behind it is that people who need a loan (e.g., to pay off credit card debt) borrow money directly from other individuals. The borrowers get loans for situations they normally couldn’t (or they get better rates), and the lenders get better returns than they would for the debt investing normally available to them.

The biggest problem with P2P lending as an investor is that when a borrower defaults, you lose the money you lent them. This risk is partially addressed by interest rates being higher for riskier borrowers. You can also spread your lending over as many borrowers as possible so that you don’t take a big loss when any single one defaults. To give an idea of typical default rates at LendingClub, the interest rates on most of my loans are around 12%, but my effective rate of return has bounced between 6% and 8% (suggesting a ~5% default rate). These are solid returns, but I’m getting a bit less than 6% right now, and my returns could go much lower (maybe even negative) as the pandemic-induced recession affects more people.

Cryptocurrency-Backed P2P Lending

For a while now, I’ve been seeing ads for a different type of P2P lending that removes this risk — P2P lending where the loans are backed by cryptocurrencies. In this case, borrowers must put up some cryptocurrency as collateral in order to procure their loan. The advantage for the lender is that they don’t completely lose out when a borrower defaults. Instead, the borrower’s collateral is liquidated and used to pay back the lender. The collateral is also liquidated if its value falls too close to the loan value, to ensure that the lender will be repaid. The advantage for the borrower is that they can secure much lower interest rates by putting up the collateral.

Asset-backed loans aren’t new (two of the biggest categories of debt for individuals, mortgages and car loans, are asset-backed), but the novelty of crypto-backed loans is that they can be handled by “smart contracts” (programs that run automatically based on preset rules). With smart contracts, the lender doesn’t need to have any trust in the borrower. Once the collateral is locked up in the smart contract, the only way the borrower can recover their crypto is to pay off the loan.

Constant, the crypto-backed P2P lending service I learned of, offers lenders 7% returns. This is a great rate (higher than I’m getting from my traditional P2P lending at present), but it’s even more impressive because it’s essentially risk-free. I can only think of 3 reasons why you might lose money with a crypto-backed loan, and they’re all much more dramatic (so hopefully less likely) than borrower defaults.

  1. The business isn’t legit. If Constant’s owners decide to take the money and run (like the cryptocurrency exchanges of yore), then you would lose any money you put into it. I don’t think this is likely because the owners would be risking jail time and a lot of general unpleasantness.
  2. Crypto value crashes too fast to be liquidated. The collateral is liquidated when its value drops to 110% of the loan value. If crypto markets crash so fast that the collateral value drops from 110% to less than 100% before it can be sold, then the liquidation wouldn’t actually cover your loan. I don’t think this is likely because as volatile as crypto is, flash crashes are still few and far between.
  3. Your account gets hacked. Since money can be taken out of a Constant account in the form of crypto, you don’t have the same fraud protection as with most financial institutions. But as long as you use two-factor authentication, it is very difficult for someone else to get into your account.

For a long time, I didn’t bother with Constant because I assumed there was no market for it. Who would want to pay 7% interest when they already had enough money for whatever they wanted to buy (they would just need to sell their crypto)? However, the promise of 7% returns risk-free was enough to finally get me to try it, and to my surprise they originate $100k+ in loans daily, and all my loans have been filled the same day.

Note: since I wrote this, Constant switched to a lending pool model. Basically, rather than individual lenders matching directly with individual borrowers, lenders pay into a pool that borrowers draw from. It doesn’t really change the mechanics of being a lender, except that loans are now filled instantly and you can get slightly better rates for longer loan terms (7% on 1 month up to 7.5% on 6 months).

My initial concern was unfounded, and borrowers are really using this service. But that still leaves the question: why take a loan rather than just selling your crypto?

Potential Uses for Crypto-Backed Loans

The classic loan use cases (home improvement, large purchases, etc.) make even less sense when you look at the details of how Constant’s loans work. First, they are paid back in a lump sum at the end of the term, rather than in monthly installments like most personal loans. This means repayment won’t feel very manageable or affordable for someone who needs to take out a loan for a big chunk of cash in the first place. Second, the terms are short: the most common loan term is 30 days, and as far as I can tell no loans run longer than a year. If you can save up enough money to pay back a loan within a couple of months, that raises the question: why not just wait a couple of months instead of taking out a loan?

While Constant makes basically no sense for classic loan uses, I have come up with some potential reasons borrowers might have.

1. Leverage to Buy More Crypto

I suspect the main reason people borrow through Constant is to get leverage on their crypto (similar to margin trading, if you’re familiar with that). Basically, they’re making a bet that crypto prices will go up, and they’re confident enough about this that they’re willing to stake the crypto they already have on this bet.

How exactly does this work? Suppose you’re convinced the price of Bitcoin will rise substantially this month. You already own $15k in Bitcoin, but you want more and don’t have the cash for it. With Constant, you can take out a $10k loan by putting up your $15k in Bitcoin (they require 150% loan value as collateral), and buy more Bitcoin, so you now have $25k in Bitcoin. If the price increases by 20%, at the end of the month you’ll have $30k in Bitcoin, and after paying back the $10k loan you’ll have $20k in Bitcoin (it would actually be a bit lower because of interest and fees, but this is roughly the right amount). If you instead had just held on to your original $15k in Bitcoin, it would now be worth $18k. So through the leverage afforded by Constant, you gained an extra $2k.

Of course, this comes with a big risk. Suppose the value of Bitcoin dropped by 20% instead. If you had just held on to your original Bitcoin, it would now be worth $12k. But with leverage, the $25k of Bitcoin would drop to $20k, and you’re still on the hook for $10k, so after paying back the loan you would have $10k — an extra $2k in losses.
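The arithmetic in this example is easy to sketch in a few lines of code. The fee figures below (1% origination, 7% APR) are the numbers quoted elsewhere in this post, so treat this as a rough illustration of the math rather than Constant’s actual calculation:

```python
def leveraged_outcome(held, loan, price_change, apr=0.07, months=1, origination=0.01):
    """Rough end-of-term value of a crypto position bought partly with a loan.

    held: value of crypto already owned (posted as collateral)
    loan: cash borrowed and immediately spent on more crypto
    price_change: fractional price move over the term, e.g. 0.20 for +20%
    """
    position = (held + loan) * (1 + price_change)             # crypto value at term end
    repayment = loan * (1 + origination + apr * months / 12)  # principal plus fees
    return position - repayment

# The example above: $15k held, $10k borrowed, one month, +/-20% move
print(leveraged_outcome(15_000, 10_000, 0.20))   # a bit under $20k, as noted above
print(leveraged_outcome(15_000, 10_000, -0.20))  # a bit under $10k
```

Ignoring fees, the two scenarios land exactly on the $20k and $10k figures from the example; the fees shave roughly $160 off each.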

2. Avoiding Fees/Waits

Another thought I had is that borrowers might use Constant to avoid the fees and waits associated with exchanges. After doing a tiny bit of digging, I no longer think this is a valid use case, but the reasons why are relevant to the last potential use case so I’ll still discuss this one.

Here was my thought: it typically costs a ~3% fee to convert crypto to fiat on exchanges, and it might take a few days to get your fiat cash out. A 30 day loan on Constant has a smaller fee than that (1% origination + 7%/12 ≈ 0.58% interest), and you can withdraw the fiat as soon as your collateral is posted. If you intended to default on the loan, this might still look like a better option than selling.

However, a few details rule this reason out. First, there is a 10% late repayment fee, so if you’re aiming to default, the fee goes from ~1.6% to ~11.6%, which is a steep price to pay to get your money a little faster. Second, from what I can tell, if you default Constant keeps all of your collateral rather than returning the excess value. Since collateral is posted at 150% of the loan value, that would turn a ~10% fee into potentially a 50% fee. Because of the fee structure behind defaults, this use case seems very unlikely.
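To make the comparison concrete, here is the arithmetic with the figures quoted above (the ~3% exchange fee is a ballpark assumption, not any specific exchange’s rate):

```python
exchange_fee = 0.03              # ~3% to convert crypto to fiat on an exchange
borrow_fee = 0.01 + 0.07 / 12    # 1% origination + one month of interest at 7% APR
default_fee = borrow_fee + 0.10  # add the 10% late repayment fee

print(f"sell on an exchange:  {exchange_fee:.1%}")  # 3.0%
print(f"borrow, then repay:   {borrow_fee:.1%}")    # 1.6%
print(f"borrow, then default: {default_fee:.1%}")   # 11.6%, before any forfeited collateral
```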

3. Avoiding Tax Burden

The final reason I came up with seems possible, but I think it’s unlikely to make up a large portion of the loans. If someone wants to sell their crypto holdings, they’ll need to pay capital gains tax on it. Assuming they’ve held their crypto for a long time, they’d be on the hook to pay almost 20% of the crypto’s value in taxes.

I’m not familiar enough with the details to know if this is true, but it seems plausible to me that taking out a loan creates no tax liability, even if you don’t pay it back and it gets covered by your collateral. So in this case, a ~10% late repayment fee would be less than the ~20% tax rate associated with just selling, and the borrower would come out ahead.

However, this assumes that the collateral value has dropped down to 110% of the loan value, otherwise the borrower loses more from the excess collateral value than they would from taxes. So in a sense this would depend on them betting that their crypto’s value will drop ~30% over the course of the loan (from 150% loan value to 110% loan value), in which case they’d be better off selling while the crypto value is still high. Maybe this could be mitigated by purchasing a variety of options on crypto, but I haven’t thought of a straightforward way to guarantee that you come out ahead. Overall, it’s quite a convoluted way to try to pay 10% tax instead of 20%, but I wouldn’t completely rule it out because I’m sure people have done more to avoid less.

The bottom line

The only reasonable use case I can come up with for using Constant as a borrower is for leverage, and it doesn’t seem far-fetched that this is what borrowers are doing. After all, if they bought crypto in the first place, it’s clear that they have a big appetite for risk. But from the perspective of a lender, it doesn’t really matter what the borrowers are doing with the money. The loans are essentially risk-free, and there’s enough demand for them to fill all the loans I would make.

For now I will continue to dabble in Constant, but it’s possible that as I grow more confident in the company, I’ll move a significant chunk of my savings there. If it’s as risk-free as it seems, it’s a much more lucrative place to park my money than a savings account or treasury bonds.

Is Sustainable Investing Actually Helpful?

I graduated a couple years ago, but it’s only recently that I’ve had a cushy full-time job that pays me significantly more than I need to live comfortably. And that means that despite my interest in personal finance and retiring early, I am finally succumbing to some lifestyle creep. Fortunately for my wallet, I still have no desire to buy the latest flagship smartphone, move into a bigger house, or drive a luxury car (well, maybe a Tesla). Instead, my lifestyle creep has manifested in spending more for the “sustainable” choice: buying Allbirds instead of Pumas, buying organic rather than conventionally farmed food, and paying the small premium to get my electricity from wind and solar rather than fossil fuels.

While a lot of my choices surrounding consumption have changed, one area that’s been slow to change is where I’m investing. This might seem surprising, given the recent trend of “Sustainable Investing” — also sometimes called Socially Responsible Investing (SRI) or Environmental, Social, and Governance (ESG) investing. Part of the reason I haven’t changed much is that the “sustainable” funds most accessible to me don’t seem to actually be that different from normal index funds: compare the holdings of Vanguard’s ESG fund to their total stock market fund. But I was also hesitant because until recently I wasn’t convinced that buying stock of sustainable companies actually did any good.

My thinking went as follows: if I buy stock in a sustainable company, that money doesn’t go to the company — in the best case scenario all it does is raise the price of the stock (and therefore the valuation of the company) a tiny bit. So changing my shopping habits would do a lot more to give money to companies I think are doing good things than buying stock of those companies. I didn’t even think raising the stock price did any good, but after doing some digging, it seems there are some benefits. Some researchers have modeled the effect of sustainable investing, and found that a preference for sustainable companies does lead to societal benefits that arise from their stock prices going up:

  • It lowers sustainable companies’ cost of capital (basically, it makes it easier for those companies to raise/borrow money)
  • It encourages companies to become sustainable so that sustainable investors will buy their shares and drive their share price up

So at some point it may be worth revisiting the funds I’m invested in and perhaps choosing some that hold companies explicitly connected to sustainable activities. But for now, since I’m not willing to pay a 2% expense ratio and the amount of money I have to invest is so small anyway, I’ll leave it to the professionals.

However, even though I’m not shifting all my stock holdings into sustainable companies/funds, I have found some other investing activities that seem like they make a more direct impact. One option would be to buy bonds from sustainable companies, because in that case your money is going directly to the company. (I haven’t pursued this because it’s not super convenient to buy bonds from specific companies.)

Another option that gets your money directly to the company is to buy equity during fundraising rather than buying stock on an exchange. When you buy stock of a well established company on a stock exchange, your money goes to some other person selling their stock, not the company. But if you buy the stock when the company is issuing it, your money goes directly to the company and can be used to do something (hopefully) useful and good. I didn’t think this type of investing was available to the general public, but it turns out there are some opportunities that don’t require you to be an institutional or accredited investor. Here’s one example I’ve been using.

From the finance perspective, is investing in these startups as smart as just putting more money into my brokerage account? Definitely not: it’s way higher risk. Consequently, I’ve just been putting small amounts of “play money” into these companies (i.e., money I can afford to completely lose), more to feel like I’m doing some good than to achieve financial goals. So while my investment choices haven’t shifted as rapidly as where I’m buying my clothes, I’m at least at the beginning of a path to sustainable investing that I actually buy into.

2019’s Resolutions

I don’t normally make new year’s resolutions, because if I notice a change I want to make in my life, I start right away. However, for 2019, I did happen to make two resolutions that started along with the new year. Since I shared those on the blog (I guess to hold myself accountable), I figured I’d do a check-in now that it has been a year.

My first resolution had to do with not feeling particularly strong at rock climbing, which I thought was because I wasn’t spending enough time at the gym. I had moved to an apartment much further from the nearest rock climbing gym, so I tended to climb only on the way to or from work, and the sessions were usually pretty short. I resolved to fix this by having 2+ hour sessions at least twice a week. On that front, I think I fully stuck with my resolution. I’m sure there were some odd weeks where I didn’t get two long climbs in (e.g., because I was sick), but overall I have been spending much longer at the gym than when I first moved. However, I also had a checkpoint: I would revisit my resolution if I was unable to climb half the V6s in the gym at the end of March. I could only climb ~25% of the V6s in the gym at the end of March, but I didn’t end up revising my resolution.

I think the challenge there is that while I stuck with the specific resolution I made (climbing more), achieving the actual goals of climbing harder problems and “feeling strong” is much fuzzier. Climbing grades can be very subjective, so even though there’s only one V6 up in the gym right now that I’ve climbed, I feel about as strong as I’ve felt at any other point this year. So on the climbing resolution, I would say I mostly kept up with the letter of the resolution, but I don’t know if I kept up with the spirit of the resolution.

My second resolution was to write one blog entry per month, with the real goal of sparking progress on personal projects. I did a decent job of sticking with the specific resolution – I’ve written 9 entries since I wrote the post on my resolutions, for a 9/12 completion rate, a solid C. It’s hard to say how much it helped with the underlying goal of working on personal projects. I have made progress on certain things, mostly related to being handy around the apartment, but I think the change in my job status has also undercut the resolution a bit. Since I’m now full time at Brilliant, I have much more ownership over projects there, and don’t really need separate things to work on to feel like I’m making progress on something that’s mine.

For an academic overachiever, a 75% completion rate might seem bad, but given my transition to full time work I’m pretty happy with how many blog posts I wrote last year. I would like to continue posting when I can, as I still have plenty of ideas I want to flesh out through writing, but I probably won’t keep the explicit goal of one post per month moving forward. Hopefully I’ll make another post before we’re too deep into 2020.

Probabilities of Opposed Checks in Dungeons & Dragons

In my last post on D&D, I wrote about differences in the probabilities of success for ability checks using modifiers (an old system) compared to advantage/disadvantage (a newer system). That was a pretty innocuous topic, since it was just applying math to different ways of interpreting dice rolls. In this post, I wanted to explore a potentially more controversial topic: the probabilities of opposed checks. It’s potentially controversial because it gets into the question of whether certain aspects of D&D are “realistic enough”. Some people might argue that’s an important question, since the universe of D&D is governed by much of the same physics as our own, while others might argue the opposite because D&D is fundamentally a fantasy game. In any case, I’m still interested in the math, so hopefully my results don’t upset any D&D players with strong opinions.

With a normal ability check, you roll a 20 sided die (“d20” for short), add modifiers, and compare the result to a fixed number. (If you have advantage or disadvantage on the check, you roll 2 d20s instead and take the higher or lower number, respectively.) For example, an easy task might require that your roll plus modifiers be 10 or higher, while a very difficult task might require that your roll plus modifiers be 25 or higher.

In an opposed check, two characters are pitted against each other, so instead of needing your roll to beat a fixed number, the winner of an opposed check is whoever rolls highest after modifiers have been applied. A common example of an opposed check is when one character attempts to grapple another character: the two players involved make an opposed athletics check, and if the instigator rolls higher, they have successfully grappled the other character.

We can represent the probability of different outcomes visually. Each player rolls a d20, so there are 20*20 = 400 possible outcomes, which we can lay out in a grid. In the simple case where both players have the same modifier for the check, if they roll the same number the result is a tie (grey squares in the chart below), if player 1 rolls higher they win (blue squares), and if player 2 rolls higher they win (red squares).

The relative area of each color tells us the probability of that outcome. For simplicity, we’ll treat the grey squares as being half red and half blue (depending on the exact situation/house rules being used, a tie could lead to either player “winning” the opposed check). For the situation above, the area is then half red and half blue, matching our expectations that evenly matched characters should have a 50% chance of winning.

However, for mismatched characters, the probabilities might not be quite what you’d expect. A warrior with a high starting strength score (i.e., not boosted by magical spells or special items) who is proficient at grappling might have a modifier of +6 on their check. They could attempt to grapple a wizard with unremarkable strength who is not trained at grappling, corresponding to a +0 modifier. In this case (with player 1 being the warrior), the line dividing the areas will move down by 6 places, since the warrior’s modifier is +6 relative to the wizard’s.

The total area of the chart is 20*20 = 400, while the area of the wizard’s triangle is now 1/2*14*14 = 98. That means that even though the warrior is built entirely around being strong (and good at grappling), and the wizard has no specialization in grappling, the wizard will still succeed 98 times out of 400 – a roughly 25% chance of winning the opposed check.
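We can sanity-check the triangle argument by enumerating all 400 roll combinations directly, counting a tie as half a win (matching the half-red/half-blue treatment of the grey squares above):

```python
def underdog_win_prob(mod_gap):
    """Chance the lower-modifier character wins an opposed check,
    given the gap between the two modifiers (e.g. 6 for +6 vs. +0)."""
    wins = 0.0
    for low_roll in range(1, 21):       # the underdog's d20
        for high_roll in range(1, 21):  # the favorite's d20
            if low_roll > high_roll + mod_gap:
                wins += 1.0
            elif low_roll == high_roll + mod_gap:
                wins += 0.5             # tie counts as half a win
    return wins / 400

print(underdog_win_prob(0))  # 0.5   -- evenly matched characters
print(underdog_win_prob(6))  # 0.245 -- the wizard's ~25% chance
```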

We can remedy this a bit if the warrior has advantage, but the wizard still might have a better chance than you expect. In this case, the warrior gets to roll twice and choose the higher roll, so there are three d20s being rolled in total, for 20*20*20 = 8000 possible outcomes. We can represent the new chart as a cube, with each edge corresponding to one of the d20 rolls. For the wizard to succeed, they’ll need to roll higher than both of the warrior’s rolls (after accounting for the modifier), so there’s just one corner of the cube of possibilities that corresponds to their success:

The wizard’s corner of the cube is a pyramid with square cross-sections (for each winning wizard roll, both of the warrior’s rolls must fall below it), so its volume is 1/3*14*(14*14) ≈ 915. The ratio of that corner to the total cube volume is then roughly 915/8000, or an 11% chance of success. So advantage cuts the wizard’s chances roughly in half, which probably matches our expectations of reality much better, but it’s still a decent chance for the wizard to succeed – better than some lopsided boxing odds, for example, even though in that case both fighters will still be professionals.
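As a check on the geometric argument, we can enumerate all 8,000 combinations of the three rolls directly, with the warrior keeping the higher of their two dice and ties again counting as half a win for the wizard:

```python
wins = 0.0
for wizard in range(1, 21):
    for roll_a in range(1, 21):
        for roll_b in range(1, 21):
            warrior = max(roll_a, roll_b) + 6  # keep the higher roll, then add +6
            if wizard > warrior:
                wins += 1.0
            elif wizard == warrior:
                wins += 0.5                    # tie counts as half a win
print(wins / 8000)
```

The exact count comes out very close to the continuous volume estimate, as expected.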

For many D&D players, this analysis is completely worthless, because a lot of the entertainment of D&D comes from the high variance and wacky, unexpected situations. But it does tell us something useful for players who enjoy a game that’s more grounded in reality. For situations that shouldn’t depend much on variance, you might not want to call for a die roll at all (e.g., in an arm wrestling match, a character with 2+ strength more than the other character could win automatically, unless the weaker character cheats). And even in situations that do have some variance, you may want to grant the character with the higher modifier advantage more readily than normal, so that the associated probabilities match something closer to what we’d expect.

How Much Will Planting 20 Million Trees Help Prevent Climate Change?

A fundraiser called TeamTrees was recently started by a group of YouTube content creators with the goal of raising $20 million by the end of the year, which will go towards planting 20 million trees. While trees provide lots of environmental benefits besides sucking carbon dioxide out of the air, the ostensible reason for planting these trees is to fight climate change. When I heard about this fundraiser, it made me curious: how much will planting 20 million trees help? None of the folks that are part of the fundraiser seemed to address this question in their announcement videos, so I figured I’d do some rough estimates myself.

A note up front: just because I’m taking a critical look at how much the fundraiser will help doesn’t mean I think it’s not worth donating to or that it’s a waste of time/money/energy. I donated to TeamTrees, and I make monthly donations to a few different environmental organizations. Please read through the full post before making assumptions about what I think of the efficacy of the fundraiser.

One way to approach this question is to look at how many trees there are in the world. According to a research paper in Nature, there are about 3 trillion trees on Earth, and there were roughly 6 trillion trees at the beginning of human civilization. Thus, humans could be considered responsible for cutting down or otherwise killing roughly 3 trillion trees. From this perspective, 20 million trees seems like barely a drop in the bucket: we’d be restoring about 7% of 1% of 1% of all the trees we’ve cut down.

We can also take a more direct look at the effect of planting trees on the amount of carbon dioxide (CO2) in Earth’s atmosphere right now. Atmospheric CO2 is the main reason we’re experiencing global warming – I won’t go into the details in this post, but essentially to stop climate change we’ll need to stop putting more carbon dioxide in the atmosphere. Atmospheric CO2 is measured in parts per million, or ppm. Currently, Earth’s atmosphere has about 410 ppm, which means that for 1 million air molecules, about 410 are carbon dioxide. Prior to human civilization, atmospheric carbon dioxide was at around 280 ppm. So we’ve been responsible for an increase in CO2 of roughly 130 ppm. A 1 ppm increase in CO2 corresponds to roughly 7 trillion kilograms of carbon dioxide.

So the question then becomes, how much CO2 does a tree take out of the atmosphere? Some rough estimates are that a tree can absorb about 20 kg of CO2 per year while it’s growing and about 1000 kg of CO2 over the course of its life. Assuming that each of the 20 million trees absorbs 1000 kg of CO2, all the trees combined would absorb 20 billion kg of CO2 – far short of the 7 trillion kg required to reduce atmospheric CO2 by just 1 ppm.
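Putting the rough numbers above together (all of them ballpark estimates from this post, not precise measurements):

```python
KG_PER_PPM = 7e12     # ~7 trillion kg of CO2 per 1 ppm of atmospheric concentration
KG_PER_TREE = 1000.0  # rough lifetime CO2 absorption of a single tree
TREES = 20e6          # trees the fundraiser aims to plant

total_kg = TREES * KG_PER_TREE  # 20 billion kg of CO2
ppm = total_kg / KG_PER_PPM
print(f"{total_kg:.0e} kg of CO2, roughly {ppm:.4f} ppm of the ~130 ppm increase")
```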

These calculations might seem discouraging, which may explain why none of the TeamTrees participants made a video about them. It will take a lot of work to stop global warming – so much so that planting 20 million trees would be a rounding error within a plan that could actually reduce atmospheric CO2 by the ~100 ppm required to return us to pre-industrial levels.

Does that mean that TeamTrees is bogus and shouldn’t be bothered with? Definitely not. For one, trees provide benefits beyond just absorbing carbon dioxide. But beyond that, planting 20 million trees will ideally be viewed as a first step. If you donate to TeamTrees and then go back to living a high carbon footprint lifestyle guilt free, then TeamTrees isn’t doing much good. On the other hand, if you donate to TeamTrees and continue to think about your carbon footprint in the future, reducing your consumption over time and contributing to environmental efforts long after the 20 million trees have already been planted, then TeamTrees can really be viewed as a positive force against climate change. There will be a lot more work required after the 20 million trees get planted, but that doesn’t mean they’re not worth planting.

Advantage/Disadvantage vs. Direct Modifiers in Dungeons & Dragons

This year, after a hiatus that lasted a couple decades, I started playing Dungeons & Dragons again. I’m a bit late to the D&D renaissance – it has broken into the mainstream so thoroughly that it has appeared in a number of popular (read: target audience != nerds) TV shows. D&D is popular enough that I assume anyone reading this is familiar with the basics: players take on the role of heroes and collaboratively tell a story, using dice rolls to determine the success of their attempted actions.

The most common roll in D&D uses the result of a 20 sided die, comparing it to some pre-determined threshold value set by the dungeon-master (a sort of narrator/referee for the game). Your character’s chance of success isn’t left to being a coin flip: if they’re attempting something they’re good at, like a keen eyed elven archer shooting her bow, you’ll get to add a number to the die roll before comparing it to the target value. Likewise, if they’re attempting something they’re bad at, like a dim-witted orc trying to see through an illusion, you’ll have to subtract a number from the die roll. These added/subtracted numbers, called modifiers, have been used since the first version of D&D, first played some 40-odd years ago. The most recent edition of D&D (5th edition, released in 2014) still uses modifiers, but it has also added a new twist: advantage and disadvantage.

Previously, everything was handled with modifiers: both the inherent abilities of your character and the circumstances of a particular moment. For example, the elven archer might get a +6 modifier on any attack made with her bow, and if she was attacking an unsuspecting victim who hadn’t noticed her yet, she might get an additional +4 modifier. Depending on the circumstances of a particular action, many different modifiers could apply, and you would add them all up to find the final modifier to use. In 5th edition, there are still modifiers, but they primarily apply to the inherent abilities of the hero. The circumstances of the particular action use a new system called advantage and disadvantage. Most checks will be made without advantage or disadvantage, and you simply roll the 20 sided die and add your inherent modifier. If the circumstances are favorable to your character’s success (e.g., the aforementioned bow-shooting while not being noticed), you can roll with advantage, which means you get to roll two 20 sided dice and take the higher value. If the circumstances are unfavorable, you roll with disadvantage, meaning that you roll two 20 sided dice and take the lower value.
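As a quick illustration of the mechanic (a sketch of the idea, not official rules text), a 5th-edition-style check might be rolled like this:

```python
import random

def ability_check(modifier=0, advantage=False, disadvantage=False):
    """Roll a d20 ability check: one die normally, two dice with
    advantage (keep the higher) or disadvantage (keep the lower)."""
    if advantage or disadvantage:
        a, b = random.randint(1, 20), random.randint(1, 20)
        die = max(a, b) if advantage else min(a, b)
    else:
        die = random.randint(1, 20)
    return die + modifier

# e.g., the archer (+6) firing at an unsuspecting target, with advantage
result = ability_check(modifier=6, advantage=True)
print(result)  # somewhere between 7 and 26
```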

The advantage system is more elegant: you no longer need to determine a numerical modifier for each situation, you just decide whether a situation calls for advantage, disadvantage, or neither. However, it’s also less flexible, since it can’t capture gradations between cases where advantage applies. With modifiers, you can give +1, +2, +3, and beyond. With advantage, you either get advantage on the roll or you don’t.

When I first learned about this system, advantage seemed incredibly powerful to me, like something that should be used sparingly. Getting to roll twice and choose the higher value intuitively feels like you should almost always succeed! But as we’ll see in the real meat of this post, that is not necessarily the case. Since this is ultimately all about probability, we can convert advantage into an “effective modifier” to see how much more likely it makes us to succeed on a roll.

The target value you are trying to beat (or match) with your roll is called a difficulty class, or DC. Without modifiers or advantage/disadvantage, it’s simple to calculate your chance of success. There are 20-DC sides that would beat the DC, and one side that would match it. A fair 20 sided die has an equal chance of landing on any of its 20 sides, so your chance of success is given by:

\textrm{prob. success}=\frac{20-DC}{20}+ \frac{1}{20}= \frac{21-DC}{20}

If we add in modifiers, it doesn’t complicate things much. A modifier of +3 means that there are three additional sides we can roll on that die that will lead to success, while a modifier of -2 means there are two fewer sides. So, adding this into our equation, we get:

\textrm{prob. success}=\frac{21-DC+\textrm{mod}}{20}

We can see that changing the modifier by 1 changes the probability of success by 1/20, or 5%. This corresponds to the 20 sided die having a 5% chance of landing on any given side, and changing the modifier by 1 leading to one additional (or fewer) side of the die leading to success.

This makes it very easy to see how changing a modifier affects probability. Assuming that the DC is in the range where it will be possible for us to succeed or fail (i.e., it’s not extremely low like -4 or extremely high like 37), a +2 modifier will always improve our probability of success by 10%, and a -5 modifier will always decrease the probability of success by 25%.

To see how advantage and disadvantage affect our probability of success, it is helpful to define a more convenient quantity, the “effective DC”: EDC = DC − 1 − mod. This folds the modifier into the target number and allows us to rewrite the equation above in a cleaner way:

\textrm{prob. success}=\frac{20-EDC}{20} = 1 - \frac{EDC}{20}

And we can also calculate our chance of failure:

\textrm{prob. failure}=\frac{EDC}{20}

To calculate probabilities of rolls made with advantage or disadvantage, you need to understand the probabilities of independent events. The result of one roll doesn’t affect the result of the other, so the two rolls can be treated as independent occurrences. When we roll with advantage, we get to choose the higher number, so to fail a roll made with advantage, both dice would need to fail. The probability of failing twice in a row is just the probability of failing once times the probability of failing once:

\textrm{prob. failure w/ adv.}=\left(\frac{EDC}{20}\right)^2

and thus the probability of success is just 1 minus the result above:

\textrm{prob. success w/ adv.}=1-\left(\frac{EDC}{20}\right)^2

With advantage, we’re squaring the fraction that we subtract from 1, so clearly we have a greater chance of success, but it’s not as simple as the case with modifiers, where we could say that changing the modifier by 1 changes the chance of success by 5%. There’s no fixed change with advantage; it depends on your original chance of success.

(Note: you can do a similar calculation for disadvantage, but in that case the chance of success w/ disadvantage is chance of success squared and chance of failure with disadvantage is 1 – chance of success squared. For the rest of this post I’ll only work through examples with advantage, but the same principles apply to disadvantage.)
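These formulas collect neatly into a single helper. The sketch below assumes the EDC convention defined above (EDC = DC − 1 − mod); `prob_success` is an illustrative name:

```python
def prob_success(edc, mode="normal"):
    """Chance of succeeding on a d20 check with effective DC `edc`,
    where EDC = DC - 1 - mod, clamped to the achievable range."""
    p_fail = min(max(edc / 20, 0.0), 1.0)
    if mode == "advantage":
        return 1 - p_fail ** 2        # must fail both dice to fail overall
    if mode == "disadvantage":
        return (1 - p_fail) ** 2      # must succeed on both dice
    return 1 - p_fail
```

This matches the worked examples that follow: `prob_success(18)` comes out to about 0.10, and `prob_success(18, "advantage")` to about 0.19.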

We can see the varying benefit of advantage in practice by looking at some sample EDCs. Let’s first consider the very difficult EDC of 18 (meaning you’d need to roll a 19 or 20 to succeed). Without advantage, the probability of success is 10%:

\textrm{prob. success}=1 - \frac{EDC}{20} = 1-18/20 = 0.1

With advantage, the probability of success is 19%:

\textrm{prob. success w/ adv.}=1-\left(\frac{EDC}{20}\right)^2 = 1 - 0.9^2 = 1 - 0.81 = 0.19

Thus, advantage improved our chance of success by 9%, which corresponds roughly to a modifier of +2 (which would give us a bonus of 10%).

Next, let’s look at the case of an easier EDC of 8 (meaning you’d need to roll 9 or higher to succeed). Without advantage, the probability of success is 60%. With advantage, the probability of success is 84%. Thus, advantage increased our odds of success by 24%, corresponding roughly to a modifier of +5.

As it turns out, the increase in probability of success is greatest for moderate EDC values. With a very high DC, you are still likely to fail even with advantage, and with a low DC you are likely to succeed even without advantage, so the addition of advantage doesn’t change the probabilities much. But if you have a roughly 50% chance of success, adding an extra attempt is the most valuable. You can see the equivalent modifier when you have advantage based on EDC in the plot below:
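The numbers behind that plot are easy to reproduce: the probability gained from advantage is p − p², where p = EDC/20 is the chance of failing a straight roll, and dividing the gain by 0.05 converts it into modifier points:

```python
def equivalent_modifier(edc):
    """How many modifier points advantage is worth at a given EDC.
    Each +1 modifier adds 5% to the success chance, so divide the
    probability gained from advantage by 0.05."""
    p_fail = edc / 20                 # chance of failing a straight roll
    gain = p_fail - p_fail ** 2       # (1 - p^2) - (1 - p)
    return gain / 0.05

for edc in range(2, 20, 2):
    print(f"EDC {edc:2d}: advantage is worth about +{equivalent_modifier(edc):.1f}")
```

The gain p − p² peaks at p = 0.5 (EDC 10), where advantage is worth a full +5, and falls off toward zero at both extremes.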

From this we can see that advantage can be equivalent to a +5 modifier, which is quite strong, but that advantage is capped, and it’s less powerful than my intuition originally suggested. So while I definitely could have had plenty of fun playing D&D without having thought through this math, it has let me grant advantage (or impose disadvantage) on rolls without being worried that it’s “overpowered”.

The Marginal Cost of Driving a Mile

For car-owning Americans, vehicle expenses will almost always take up a significant portion of the monthly budget. Cars cost less than housing, and might cost less than food, healthcare, or childcare depending on specific circumstances, but they cost more than pretty much everything else. Since the great majority of American households (~90%) own cars, it is natural that cars are a big topic when it comes to personal finance advice, and there is plenty of great car-related advice out there.

But most of the advice I’ve found deals with either which car you’re buying (e.g., luxury vs. standard, new vs. used) or whether you should own a car at all. Often the discussion centers on the per-mile cost of driving and uses that to compare cars to other modes of transport. The IRS even provides an official value for how much you can deduct: $0.58 per mile driven. While this might be a decent estimate for the overall cost of car ownership, it’s not as useful for the day-to-day decision of whether to drive someplace or bike/take public transportation.

For those day-to-day decisions, the marginal cost of driving a mile is a much more useful metric. That is: how much more expensive it is to own and operate the car for each additional mile driven. The idea here is that the costs of a car can be separated into a fixed portion (that you pay no matter how much you drive) and a variable portion (that increases proportionally with how much you drive the car). I’m in a one-car household: I get by day to day without a car, but my wife has a car that she needs in order to commute. When we go into the city on the weekend, we have to decide whether to drive or take public transportation. For this case, it’s not really fair to include the fixed portion of the car cost in our decision, because we’re going to have to pay that whether we drive on the weekends or not.

I think that a simple (but reasonable) treatment would be to say that the car’s principal cost (or depreciation, if you prefer to think about it that way) and insurance costs are fixed, and that the gas and maintenance costs are proportional to distance driven. Here are my justifications:

  • The year the car was made is more important to resale value than the number of miles on the car
  • Unless you use per-mile insurance (like MetroMile), how much you drive has relatively little influence on your bill
  • I don’t think anyone will argue with a model saying that gas burned is proportional to distance driven
  • While you are supposed to have maintenance check ups at regular time intervals even if you haven’t hit the next miles driven checkpoint, the costs of the actual repairs should scale more with distance driven

So regarding the costs that constitute the marginal cost of driving: gas expenses are simple enough to calculate. Your per mile cost is simply the cost of gas divided by the fuel efficiency of your car. E.g., if you drove a car that got 28 mpg, and gas costs $3.19/gallon where you live, then the gas cost per mile would be:

\frac{\$3.19\textrm{/gallon}}{28\textrm{ miles/gallon}} \approx \$0.11\textrm{/mile}

The exact gas cost to drive a mile will vary depending on how efficient your car is and how expensive gas is in your region, but for most places in the US, it should be in the ballpark of $0.10/mile. Maintenance isn’t as simple to calculate, and will probably vary over the life of your car, but most estimates put it around $0.10/mile for a standard commuter car.

Using both these numbers, the marginal cost of driving should be around $0.20/mile, rather than the $0.58 figure from the IRS and other numbers higher than $0.30/mile that are typically cited when talking about the overall cost of driving. Using this $0.20/mile figure, it’s definitely cheaper for us to drive in to Boston for a weekend trip – it’s about 20 miles round trip, or $4, which is less than a round trip fare for one person on the subway.
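As a sanity check, that trip comparison works out like this (the $2.40 one-way subway fare is an assumed number for illustration; the other defaults are the rough estimates from the text):

```python
def marginal_drive_cost(miles, gas_price=3.19, mpg=28, maintenance_per_mile=0.10):
    """Marginal cost of a trip: gas plus distance-based maintenance.
    Defaults are the rough estimates used in the text."""
    return miles * (gas_price / mpg + maintenance_per_mile)

# 20-mile round trip into the city, vs. subway fare for one person
# (a $2.40 one-way fare is an assumption for illustration)
drive = marginal_drive_cost(20)
subway_one_person = 2 * 2.40
print(f"drive: ${drive:.2f}, subway (1 rider): ${subway_one_person:.2f}")
```

Even for a single rider, the marginal driving cost comes in just under the round-trip fare with these numbers.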

For our use case, if we’re only focusing on the cost of subway fare vs. the marginal cost of driving, we should almost always drive. But that’s not necessarily the only thing to consider. Other factors that could change the math include:

  • The effect on the environment
  • The parking situation
  • The difference in time
  • The difference in comfort
  • Getting some exercise

Going through each of these bullets, in order: one obvious difference between driving and public transportation is that the latter is much better for the environment. If that’s something you care about (and it is for me), it’s probably worth factoring it into the math as well. There are different ways you could calculate the value, but I think adding 5-10 cents/mile to the cost of driving is a reasonable way to quantify this factor. It doesn’t drastically change the math, but it does push us closer to favoring the subway (especially if you were driving one person and not two).

Depending on where in the city we’re going, a huge factor is parking. There are some trips we make where there is ample, free parking, and that doesn’t add any cost/inconvenience, but there are some places in the city where free parking is almost impossible to find, and lots might cost around $25. This is obviously a huge factor, and if we’re going somewhere with expensive lots, we’re much more likely to take the subway. If you’re paying for parking, that cost can be factored in directly. If you’re spending a significant amount of extra time looking for parking, that brings us to the next point.

In practice, the main reason people drive instead of taking public transportation is probably that it’s faster, and (when it’s true) that’s a good reason. The less time we spend traveling, the more time we can spend doing things that actually matter. A decent starting point would be to value your time at your hourly wage, though there are fair arguments for pushing that up or down. In any case, driving to Boston on the weekends usually saves us about an hour round trip compared to taking the subway, so that’s a huge factor in favor of driving. During normal commuting hours, driving to Boston might actually take longer, giving the edge to the subway, so it definitely depends on the specific situation.

It’s also worth factoring in the difference in comfort between the options. If I’m traveling with my wife, the car is slightly nicer: we’ll probably chat and listen to music, whereas when we take the subway together we’re a bit more likely to sit in silence. If I’m traveling alone, the subway is much nicer: I can relax and listen to a podcast, compared to focusing on driving (and probably getting annoyed by other Boston drivers) when I’m in the car. On this point I’d probably just add a mostly arbitrary bonus to the one I prefer: maybe adding a $5/hour discount to driving when I’m with my wife, and adding a $20/hour discount to taking the subway when I’m by myself.

It could also be worthwhile to consider the difference in exercise between options. In the example I’ve been using (driving vs. taking public transportation) it doesn’t matter much, since the only potential exercise would be a short walk to and from the subway stations, but if you’re deciding between driving and biking this will matter more. If you’re someone who would benefit from more exercise (which basically all of us are), then time spent exercising should be valuable to you, and time spent driving should not. A decent starting point would probably be to value time spent exercising at your hourly wage, but again, there are reasonable arguments to be made to shift this up or down.
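For completeness, here's one way all these factors could be folded into a single comparison. Every default number below (the fare, the hourly value of time, the 7.5 cents/mile environmental surcharge) is an illustrative assumption, not a recommendation:

```python
def compare_trip(miles=20, riders=2, fare=2.40, parking=0.0,
                 transit_extra_hours=1.0, hourly_value=25.0,
                 env_cost_per_mile=0.075, marginal_per_mile=0.20):
    """Return (cost_of_driving, cost_of_subway) for one round trip.
    Everything except the mileage and the marginal per-mile cost is
    an assumed, illustrative number."""
    drive = miles * (marginal_per_mile + env_cost_per_mile) + parking
    subway = riders * 2 * fare + transit_extra_hours * hourly_value
    return drive, subway

drive, subway = compare_trip()                # free parking: driving wins easily
drive_paid, _ = compare_trip(parking=25.0)    # a $25 lot narrows the gap a lot
print(f"drive: ${drive:.2f}, subway: ${subway:.2f}, drive w/ lot: ${drive_paid:.2f}")
```

The point isn't the specific numbers but the structure: parking and the value of time dominate the comparison, while the per-mile costs are almost a rounding error.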

In general, I try to avoid driving, and I default to thinking it’s the wrong option, but going through this exercise and looking over this list of points does give me some confidence that it’s not the worst decision in the world when we drive into Boston on the weekends. As long as we aren’t paying $25+ for parking, it seems like paying for the marginal cost of driving those extra miles is worth it.

Variations on Capture The Flag

Even though the arrival of summer no longer corresponds to a long break from school/work for me, it still reminds me of the weeks spent at summer camp when I was growing up. In my elementary school years, one of my favorite games to play at camp was capture the flag (CTF). There’s something deeply compelling about the large scale of the game, and the teamwork and coordination required to win. After playing for many summers, however, I started to realize that there are some big problems with the mechanics of the “classic” version of CTF. Perhaps it was from playing more video/board games and looking at the summer camp staple from a game design perspective, but at some point I became convinced that there should be many ways to improve on the classic version of capture the flag.

“Classic” Capture the Flag

I imagine most people reading this are familiar with capture the flag, but there are enough variations that it is still worth defining what I consider the classic version and the rules for the majority of the games I played in. The game is played on a large open field divided in half, with each of the two teams taking one half of the field as their “home” side. Each team has a flag on their side that the other team is trying to retrieve and bring back to their own side. Successfully retrieving the flag earns your team a point (or wins the game outright, if you’re not playing for a fixed time). If you are tagged by an opposing player while on their side of the field, you go to a “jail” on the opponents’ side of the field. Players in jail are freed if a non-jailed player from their team tags the jailed players. The players in jail can form a chain by holding hands in order to stretch further from the jail spot to make it easier for teammates to free them. There is a “safe zone” around the flag spot, so that if you reach the flag, you can take a breather without being tagged before trying to run the flag back to your own side.

The fundamental issue I have with this standard version of capture the flag is that the optimal strategy is very defensive, resulting in slow, war-of-attrition gameplay. In any game of CTF, you need to distribute your resources (players) between offense (trying to capture the opposing flag or free your jailed players) and defense (protecting your flag and keeping opposing jailed players from being freed). Sending players to try to capture the flag is risky: either you succeed and win, or you fail and some of your players are jailed. If you have a lot of fast players, then you’re likely to succeed, but for balanced teams, the chance of success of any given attempt is pretty low. Thus, sending players on offense at the beginning of a round is generally a bad strategy. It is better to play defensively until you’ve jailed enough of the opposing players that their defenses are stretched thin and you have a higher chance of capturing their flag. Unfortunately, if both teams adopt this strategy, then CTF becomes a game of sitting around and waiting more than anything else, which is no fun for either team.

I haven’t played capture the flag since undergrad, so maybe my theory-crafting about it so many years later misses the mark of what it’s actually like to play, but in any case, all these years have given me the opportunity to come up with many variations on the classic game that address what I view as the fundamental flaw of the game.

Variations on Jail

Jail is probably the most problematic aspect of CTF, as it’s basically player elimination, one of the most infamous game design mechanics out there. Perhaps as a consequence of this, there are plenty of variations on the typical jail rules, and jail is the main aspect of the game where I’ve seen different rules actually implemented at summer camp. All the variations below are ways of making it easier to get out of jail, which discourages strategies that depend on keeping lots of opposing players in jail.

A simple variation, and one I considered adding as part of the classic rules since I think it is fairly common, is that jailed players who manage to tag an opposing player free their team. This discourages “jail guards” from staying too close to the jail, and gives jailed players something more to do, as when they are linked they can coil up and stretch out to try to catch opposing players off guard and tag them.

Another variation is to give jailed players an alternate task in order to release themselves. It could be something physical (do X pushups, do Y jumping jacks) or something mental (solve a Rubik’s cube, solve a Sudoku puzzle). Assuming the players are capable of completing the assigned task, this puts a time limit on how long players will stay jailed and gives them something to do in the meantime.

An even simpler way of assuring that players don’t stay in jail for too long is to have regular “jailbreaks”, when all players are released from both jails. Short intervals (releasing players every 2 minutes) ensure that the game stays fast paced, as players will never spend too long in jail, while longer intervals (releasing players every 10 minutes) don’t change the game drastically, but guarantee that players won’t be in jail all afternoon.

The most dramatic change to the rules would be to get rid of jail altogether. The point of jail is to be a negative consequence for being tagged, but you don’t necessarily need a jail to achieve this. An example of an alternative is that rather than tagged players being jailed, tagged players must walk back to their own side (or maybe their own flag safe zone) with their hands on their head until they’re allowed to resume play. Getting rid of jail essentially eliminates downtime, so everyone gets to play for the entirety of the game, and there is little disincentive from sending players to try to capture the opposing flag.

Variations on Field of Play

While it’s not always an easy change to implement, one option that can help push the balance of play towards offense rather than defense is changing where the game is actually played. Rather than an open field, where everyone can see what’s going on and quickly respond to defend when their flag is being threatened, CTF can also be played in a forest or on a campus with buildings between the flags. Obstacles like trees and buildings give players something to hide behind, so there are opportunities to steal a flag through distraction and stealth, rather than just by running faster than the other players.

Another option, if you’re limited to playing on an open field, is to make the sides more complex than just a field cut in half (although you’d probably need a lot of cones/rope to mark the sides in this case). On the classic field, teams only need to worry about opponents coming from a single direction. But if the shape of the two home sides were interdigitated “L”s or “U”s, for example, then flag guards would need to worry about players coming from two or three directions, making the flag harder to guard and making capture attempts more likely to succeed.

Variations on the Flag

Another option for variation is to change the rules around the flag itself. In the classic version of the game, the flag can be handed off, but it can’t be thrown – if not due to the rules, then simply because flags are generally difficult to throw. If the flag is replaced by a ball or a frisbee, then allowing the flag to be thrown between teammates (as long as it doesn’t touch the ground) opens up new offensive strategies. In order to keep it possible to defend against the thrown flag, it’s probably prudent to disallow throws to or from safe zones. For example, you would need to step outside the safe zone around the flag to throw, and you couldn’t throw it to a teammate on your side of the field’s dividing line.

Another change that would open up the field of play would be to have multiple flags on each side. This would naturally make it harder for a team to defend all of its flags effectively, and you could additionally assign different point values to the different flags, which would add a layer of strategy to the game. For example, a flag that is close to the dividing line might be worth just 1 point, while a flag that is deep on the opponent’s side and doesn’t include a safe zone around it might be worth 5 points.

Variations on the Tag Method

Classic capture the flag is played with one hand touch, which means you just need to touch an opposing player with one hand (or really even just one finger) in order to get them out. A simple change to make it harder to defend and easier to capture the flag would be to use two hand touch, which requires you to tag an opposing player with both your hands simultaneously in order to get them out. In practice however, this variation might not work well, as I suspect it would lead to more arguments about whether or not someone was really tagged out (and classic CTF already has enough of those arguments).

Another variant is to use waist flags (like those used for flag football) that must be pulled in order to get a player out. In theory, this should make it more clear-cut whether a player was tagged out or not, but with the added possibilities of a player blocking their own flags with their hands or a flag falling out on its own, it’s unlikely that this method would eliminate accusations of cheating.

Perception vs. Statistics on the Hearthstone Ranked Ladder

When Hearthstone, Blizzard’s take on a collectible card game, first came out, I was hooked. It tapped into my nostalgia for all the Magic: The Gathering I played when I was younger, but offered a much smoother playing experience than what was available for other online “card” games at the time. I played for a few years, but eventually grew bored of Hearthstone. Now I’m back on the online CCG bandwagon with the release of Magic: The Gathering Arena, the new Hearthstone-ified version of the original TCG/CCG.

Playing an online CCG again reminded me of one thing that never really made sense to me about Hearthstone: the way its ranked ladder worked. Most competitive online games use some variant of the Elo rating system. In such a system, each player has a number which corresponds to their skill level – whenever they win a game, that number goes up, and whenever they lose a game, that number goes down. The magnitude in the change of Elo rating is equal for the two players involved in the game (e.g., if I lose to you and consequently lose 5 points, that means you would gain 5 points) and depends on the difference in rating between the players. For example, if a player beats a much lower rated player, they will only get a small increase in rating, but if they lose to that much lower rated player, they will experience a large decrease in rating. Typically this Elo rating is then translated into a rank/tier/division, which broadly groups players by skill level. For example, in Rocket League, the lowest ranked players are Bronze, followed by (in ascending order): Silver, Gold, Platinum, Diamond, Champion, and Grand Champion.
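For reference, the standard Elo update looks roughly like this (using a typical K-factor of 32; real games tune K and add their own wrinkles):

```python
def elo_update(r_winner, r_loser, k=32):
    """Standard Elo update: the winner's expected score depends on the
    rating gap, and the winner's gain exactly equals the loser's loss."""
    expected_win = 1 / (1 + 10 ** ((r_loser - r_winner) / 400))
    delta = k * (1 - expected_win)
    return r_winner + delta, r_loser - delta
```

A 1600-rated player beating a 1200-rated one gains only about 3 points with these parameters, while the upset in the other direction moves nearly the full 32.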

Hearthstone, Magic Arena, and most online “card” based games don’t seem to follow this method, at least not until you get to the very top tier of the ladder (e.g., “Legendary” for Hearthstone, or “Mythic” for Magic). For the portion of the ladder relevant to most players, you start at (or near) the lowest rank at the beginning of every month, a win gets you one “point” (the exact terminology varies game to game), a loss loses you one point, and there are regular point cutoffs associated with each rank (e.g., each subsequent rank requires 5 more points than the last one). There are some specific caveats to each game, but that general system forms the foundation of the ranking system.

On the surface, it might seem like there’s not really a difference between these two systems. Both systems are zero-sum: any points lost by one player are gained by another. And assuming there are enough players that you’re always matched against someone of roughly your own skill level, the importance of each game is about the same. What is different is the shape of the distribution of player skill ratings, which matters because in the card game system, this distribution is fundamentally connected to player ranks.

In the Elo system, the distribution tends to resemble a Bell curve, since each game should have roughly 50% chance of going either way, like a Galton Board. However, this distribution isn’t particularly important, since the developers can set arbitrary cutoff points for each rank. Given that the Gold and Platinum ranks are in the middle of the possible Rocket League ranks, players expect that an average player would be Gold or Platinum, and the Rocket League developers can set the ratings corresponding to Gold and Platinum to be in the middle of the distribution. If they found that players were happier when they were classified as a higher rank, they could arbitrarily shift the ratings so that Diamond corresponded to the middle of the distribution, without changing how the Elo rating system works overall.

The card game system would also resemble a Bell curve if it was truly zero-sum, but at the bottom of the ladder it is not actually zero-sum. Someone with zero points can’t lose points, so games at the bottom of the ladder result in a net positive production of points. By preventing players from going below zero points, the ranking distribution looks asymmetric and long-tailed, like a Pareto distribution, or a Chi-squared distribution with only one or two degrees of freedom. (Since this can be modeled as a diffusion process, I think of it as being like the temperature distribution in the heating of a semi-infinite solid.) The shape of this distribution wouldn’t matter if the developers set ranking cutoffs arbitrarily, but by keeping the point cutoffs regular or mostly regular, this long tailed distribution in points leads to a long tailed distribution in player ranks as well.
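You can see this skew directly by simulating the ladder as a random walk with a floor at zero. This is a deliberately simplified model (every game is a coin flip; no win streaks or checkpoints):

```python
import random

def simulate_ladder(players=10_000, games=60, p_win=0.5, seed=0):
    """Each player's points do a +1/-1 random walk with a floor at
    zero, mimicking the card-game ladder with no streaks or checkpoints."""
    rng = random.Random(seed)
    totals = []
    for _ in range(players):
        pts = 0
        for _ in range(games):
            pts = pts + 1 if rng.random() < p_win else max(pts - 1, 0)
        totals.append(pts)
    return sorted(totals)

totals = simulate_ladder()
median, top = totals[len(totals) // 2], totals[-1]
print(f"median points: {median}, best player: {top}")
```

Even though every player here has an identical 50% win rate, the median lands far below the halfway point of the point range, which is exactly the long-tailed shape described above.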

Since I’m sure most players don’t spend as much time thinking about distributions as I do, the card game ranking system becomes problematic because the perception of players’ rankings greatly differs from the reality of the ranking distribution. Back when I played Hearthstone, the effective lowest rank (corresponding to zero points in my above description) was 20, and the highest rank was 1. Consequently, a player might reasonably assume that an average skill level would get them to rank 10 or 11, when in fact, due to the skewed distribution, the median player might be at only rank 17 or 18.

So if you’re committed to this “card” ranking system, with its skewed distribution, what can you do? There are some changes that were built into the original Hearthstone system, like not having completely regular point cutoffs. For Hearthstone, the lowest ranks require 3 points to advance, while the highest ranks require 5 points. This helps spread out the lower ranks a bit more, but is not aggressive enough to really make the ranks match player expectations, and doesn’t address the fundamental problem that points are only created at the bottom of the ladder, and must funnel all the way up the ladder for more players to reach the highest ranks.

The big change I’ve noticed in Magic Arena is that there are way more instances of net positive point distribution. Hearthstone originally only had two sources of net point increase: a player losing at the bottom of the ladder when they couldn’t actually lose a point (but their opponent still gained one) and “win streaks,” where 3+ wins in a row would lead to more than one point per win. (Technically a ladder player beating a Legendary player also resulted in a net gain in points, but this would probably be cancelled out by ladder players losing to Legendary players.) In Magic Arena, they added multiple “checkpoints” throughout the ladder, so that once you achieve a certain rank you can’t fall below it. These artificial floors add more net points to the system, since losing at the bottom of the ladder OR at any checkpoint results in a net increase in points (note: I believe Hearthstone implemented a similar change after I stopped playing). The other change is that in the lower ranks, every single game is net positive: the winner gets two points and the loser only loses one. Between these changes, I suspect that the ranking distribution for Magic Arena players is much closer to player perception than it was for the original Hearthstone ladder.

All this discussion of how to carefully craft the card game ranked ladder system so that it matches player perceptions/expectations raises the question: why bother? Why not just adopt the Elo rating system used in other games? If you asked the developers of these games, I imagine they would make an argument about transparency, since it’s much clearer what a player needs to do to advance: “win X more games than I lose”, compared to “gain X Elo rating points (which are often hidden from the player)” in other games. But I believe the real reason is to drive participation. By starting players at or near the bottom of the ladder every month, the system forces them to play a decent number of games each month to achieve the rank they’re striving for. Free-to-play games (which basically all of these card games are) need high participation rates to survive, and these ladder systems allow playing to always be associated with advancement (if perhaps in a more Sisyphean way than the developers would admit).

Wealth is More Powerful than Income

In the last post, I discussed how wealth and income are mostly interchangeable, as debt allows one to convert income into wealth, and investing allows one to convert wealth into income. I noted one primary advantage of wealth (at least in the United States): income from invested wealth is taxed at about half the rate of income from labor. For answering hypotheticals like “would you rather have a guaranteed job that earns $250k/year or a lump sum of $3 million?” the difference in tax rates might be one of the most important factors to consider. But from a more pragmatic “how should I make financial decisions in my day to day life?” perspective, there is another, more important reason why wealth is more powerful than income.

The main reason that wealth is more powerful than income is that it is more robust. It’s relatively easy to lose your source of income: for many people the majority of their income comes from one job, and losing that job means you’ve suddenly lost all your income. There are many reasons you might lose your job, and they’re often out of your control. You may be able to find another job quickly and recover your income, but it could also take months or years to find a new job, and/or you might need to take a pay cut. So income is far from guaranteed over the course of your working career.

Wealth, assuming it is invested in a well diversified portfolio, is much more stable since it doesn’t depend on a single industry or company like income does. There are times when your wealth will decrease, but the worst recessions in recent history have at most cut wealth (or more precisely: stock index values) in half, and in all of these cases the markets recovered within the next decade. This doesn’t mean there will never be a market destroying apocalyptic depression, but if that happened, it would wipe out all sources of income as well.

Wealth is also more robust than income because converting it to income through investing results in a positive feedback loop of building more wealth. If any of the income that you generate is disposable, you can invest that excess, and your wealth will grow through the power of compound interest. Converting income to temporary wealth through the use of debt doesn’t have a comparable positive feedback loop. Your debt being paid off has no influence on your underlying income, so after you’re able to pay off debt you’re typically back where you started. And in the unfortunate case where you lose your income before your debts have been paid, compound interest will be working against you. Your debts will keep growing while you can’t make payments on them, without providing you any additional value.
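A toy compounding calculation shows how strong this feedback loop is. The return, contribution, and horizon below are arbitrary illustrative numbers, not projections:

```python
def grow(principal, annual_return=0.07, monthly_contribution=500, years=10):
    """Invested wealth compounds while income keeps adding to it.
    All the numbers here are arbitrary, illustrative assumptions."""
    balance = principal
    for _ in range(years * 12):
        balance *= 1 + annual_return / 12   # monthly compounding (approximate)
        balance += monthly_contribution     # invest disposable income
    return balance

invested = grow(10_000)
saved_flat = 10_000 + 500 * 12 * 10         # same cash flow, never invested
print(f"invested: ${invested:,.0f} vs. uninvested: ${saved_flat:,.0f}")
```

With these assumed numbers, the invested path ends up well ahead of simply stockpiling the same contributions, and the gap widens every additional year.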

So, from a personal finance perspective, it is important to accumulate wealth. But you should do this by accumulating true wealth, slowly saving money from your disposable income over time. Leveraging income to borrow large sums of money won’t truly make you wealthy – it may feel like it while you get to spend money you don’t have, but if you ever want to retire you can’t depend on income alone.