My progress during Season 3 of Rocket League

It should be readily evident from some of my other posts that I love Rocket League. I started playing ranked matches during Rocket League’s Season 3 (which ran from June 2016 through March 2017), and towards the end of August 2016 I started tracking my progress in a spreadsheet. I kept track of my skill ratings (similar to an Elo rating) in the different playlists using a stat-tracking website, and once a week I would run through each of the default all-star trainings five times as an alternative metric for mechanical skill.

I originally had very ambitious plans for what I would do with the stats I collected, but since it’s been two months since Season 3 ended, I wanted to post something before the project lost all momentum. Perhaps I’ll revisit this in more detail later, but for now the project has culminated in a simple page posted here, which lets you explore the different ratings/rankings over time or against number of games played.

There are a few points I found interesting from exploring the data:

  • My unranked rating (rating from playing in casual playlists) stayed more or less constant over all of Season 3, even though I undoubtedly improved over that time period.
  • Looking at a playlist rating vs. number of games played (e.g., Standard rating vs. Ranked standard games played), I hit a plateau in Duel and Solo standard, but not in Doubles or Standard. Perhaps through the latter half of Season 3 my teamwork improved more than my mechanical skills, which wouldn’t show up as much in Duel or (unfortunately) in Solo standard.
  • I plateaued fairly quickly in Keeper and Striker training, while I continuously improved in Aerial training. Keeper plateaued because I quickly approached 100% completion, whereas Striker seemed to always hover around 70%. I’m not sure why I’m not better at Striker even after spending so much time on it.

MIT Grad Admissions Blog

MIT has a fairly well known undergraduate admissions blog. I think usually it’s just anxious high schoolers reading the posts, but some have garnered wider coverage and helped shape how the world views MIT.

The graduate school wanted a blog to cover graduate admissions as well, so starting this year there is a grad admissions blog. They collected the initial round of posts by running a writing workshop over IAP in January, which I was fortunate to be a part of. You can find my posts about some of what I’ve learned as a PhD student there:

Confronting AlphaGo

Get Beyond the Bubble

Debunking the Eco-Cooler

A few months ago, a video was forwarded to me about an air-conditioning unit for developing countries that doesn’t require electricity, dubbed the “Eco-Cooler.” It’s not clear to me exactly who or what is behind the idea: the “official” website gives little confidence that it is a serious project, and the videos, while well-produced, seem to be posted through third-party accounts and get taken down after a while (this is the original link that was shared with me). In any case, you should be able to search for “eco cooler” on Google or YouTube to find the information I’m talking about, even if these links become defunct.

The science used to explain how the Eco-Cooler works in the video is wrong, and there are others who have already explained this elsewhere (1,2), although unfortunately it seems there’s a lot of noise – incorrect explanations are given along with the correct ones, and skeptics still aren’t sure what to believe. While I’ll take some time to explain why the explanation is wrong, I’m more interested (impressed really) in the experiment the video suggests you try in order to explain the working principle of the Eco-Cooler.

So how does the Eco-Cooler work, according to the video? Air is forced through a nozzle into the house, which pressurizes the air, therefore cooling it. This is bogus. First, pressurizing air heats it, while lowering pressure will lead to a lower temperature. For a real life example of this, you can look at what happens when you use a can of compressed air (or an air horn, or spray paint): as you spray, the can gets cold. When you release air from the can, you’re reducing the pressure inside, and that expansion of gas (not compression/pressurizing) is associated with lowering temperature. Second, the increase in pressure from air flowing through a water bottle nozzle would lead to a negligible change in temperature. If you want to calculate the magnitude of the change yourself, you can use the Joule-Thomson effect, the Bernoulli equation, and conservation of mass – with a back of the envelope calculation I get that air squeezed through the bottle nozzle should heat up by about 0.0001 °C. Finally, even if the air changes temperature as it is squeezed through the nozzle, it will expand as it flows into the room, so the temperature will return to its original value.
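As one sketch of that back-of-the-envelope calculation, here's a short script; the air speed and the Joule-Thomson coefficient are my own assumed round numbers, not figures from the video:

```python
# Rough estimate of the temperature change from squeezing air through a
# bottle nozzle. All inputs are assumed round numbers.

rho = 1.2       # air density, kg/m^3
v = 8.0         # assumed air speed through the nozzle, m/s
mu_jt = 2.5e-6  # Joule-Thomson coefficient of air near room temp, K/Pa (~0.25 K/bar)

# Bernoulli: pressure difference needed to accelerate still air to speed v
delta_p = 0.5 * rho * v**2   # ~38 Pa

# Joule-Thomson: temperature change for that pressure difference
delta_t = mu_jt * delta_p    # ~1e-4 K

print(f"pressure difference: {delta_p:.0f} Pa")
print(f"temperature change: {delta_t:.1e} K")
```

Even with generous numbers, the change comes out to roughly a ten-thousandth of a degree.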

So if the scientific explanation they give doesn’t make any sense, why did the video go viral? I think the video’s success is due to the extremely convincing (albeit misleading) “try this yourself” experiment. In the video, the viewer is invited to breathe onto their open palm, first slowly with an open mouth, then quickly with pursed lips. Blowing with pursed lips feels cooler, and they (falsely) claim that this is the same principle which the Eco-Cooler runs on. So what’s actually going on? The air in our bodies is typically warmer than the ambient environment, so if you breathe that air onto your skin, you’ll feel warm (this is step 1 of the video’s experiment). When you purse your lips and blow, the air comes out of your mouth at higher velocity. This leads to more entrained air, that is, the air from your mouth drags along air from the environment with it (this is also how Dyson fans work). As the ambient air is entrained, it mixes with the air from your mouth, lowering its temperature. Overall, the air hitting your hand will be at a higher temperature than the environment (since it’s a mix of high temperature air from your body and ambient air), but it still feels cool since moving air can pull heat out of your body more effectively than still air (this is why sitting in front of a fan feels cool even though the fan doesn’t cool the air at all). To experience this first hand, you can try holding your palm at different distances as you blow with pursed lips. Holding your palm further away should feel cooler, since the hot air from your mouth will have longer to mix with colder ambient air.

So how does the Eco-Cooler actually work? First, I’m not convinced that it does. The video claims the Eco-Cooler can lower the temperature inside a house by 5 °C, but the other content in the video is full of falsehoods, so there’s no reason they couldn’t have just lied about that point as well. That being said, it’s possible that the Eco-Cooler could lead to a lower temperature. Air flow through a house will keep the temperature closer to the outside temperature (the house can be hotter than outside because of absorbed sunlight – the same way a car sitting in the sun can get much hotter than its surroundings), and the Eco-Cooler might be more effective than a window because the white panel will reflect sunlight away. However “possible” does not mean “true,” and without much stronger evidence, I am not convinced that the Eco-Cooler is an idea worth pursuing.

Rocket League Halloween Costumes

This Halloween my girlfriend Jaimie and I dressed up as Rocket League cars. She doesn’t play, but I was set on the costume and she wanted her costume to match mine:


We made them ourselves, so ideally I would’ve thoroughly documented the process, but we were a bit rushed for time, since we only worked on them during the evenings of the week leading up to Halloween. Instead I just took pictures intermittently, which you can see below. The cars are primarily made of cardboard taped together and then painted; the wheels and tires are attached to the main body with toothpicks and superglue.

“Debunking” Salt Cases

Seeing something related to your field of study out in the “real world” can be a mixed experience. On one hand, it’s nice to have evidence that the topics you think about more than 99.9% of people do (it would probably still be accurate with more 9s, but I’ll be conservative here) are important in contexts other than academia, and that non-academics do indeed sometimes think about them. On the other hand, it can be frustrating to see poorly reasoned or incorrect explanations about something related to your area of expertise.

I’ve encountered a few examples of frustrating heat transfer explanations out in the real world, and in this post I’ll look at one of them: Salt Cases. Salt Cases are cases designed for iPhones that are advertised as protecting the phone from both heat and cold. The basic premise alone is already suspicious: passively (i.e., without an external power source) maintaining an intermediate temperature in both hot and cold environments is not easy to do (at least, not in steady state).

For cold protection, the explanation given is that the case is thermally insulating, and since the phone is dissipating a bit of power as heat (while it’s on, at least) it traps the heat in the phone and maintains it at a higher temperature than without the case. This explanation is completely reasonable – it’s the same reason why you get warmer when you put on more layers of clothes. You generate heat, and by adding the insulation of additional clothing layers, you trap that heat in your body, leading to a higher temperature.

The heat protection explanation isn’t very different: they say that the case is insulating (although with more of a focus on radiative heat transfer, which might be important if the case is in the sun, for example). At first blush, this might seem reasonable: the case is insulating, so if it’s hot outside you want to insulate the phone from that heat. However, when you consider this in combination with their cold protection explanation, things start to seem a little wonky. The phone is still dissipating power as heat when it’s hot out, and the same principle as before applies. So their explanation is akin to claiming that you’re going to wear a thick winter jacket in the middle of summer because you want it to insulate you from the hot weather outside.

Basically, it’s easy for something to be at a higher temperature than its ambient environment: you just need heat generation and insulation. It’s hard for something to be at a lower temperature than its ambient environment – with the exception of some exotic techniques, you need a refrigeration cycle and a decent amount of power.

That being said, they show a video where their case clearly leads to a lower temperature when two phones are left sitting in a car on a sunny day. There are a few explanations for why this could be:

  1. The cynical explanation is that they didn’t really leave the Salt Case phone in the car, they kept it somewhere else cooler and moved it into the car to take the video. As easy as this would be to do, I don’t think this is the correct explanation.
  2. They advertise their case as reflecting (and therefore not emitting) infrared light, and the thermometer they use is an infrared thermometer. When this type of thermometer is used to measure the temperature of something that doesn’t emit infrared, it doesn’t accurately measure the temperature of that object. I don’t think this is the correct explanation either, because in the video they open the case, and it would be weird if the screen protector also reflected infrared.
  3. The case offers heat protection for a limited amount of time. Since phones don’t dissipate that much heat (if they did, you’d often notice that they felt hot in your hand), even if they were perfectly insulated the temperature would rise slowly. A back of the envelope calculation suggests a temperature rise on the order of 1 degree Fahrenheit per minute, so a well insulated phone might not start overheating in 30 minutes, but would after a few hours. I think this is the correct explanation.
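For reference, here's one version of that back-of-the-envelope estimate; the mass, specific heat, and dissipated power are my own assumed round numbers:

```python
# How fast would a perfectly insulated phone heat itself up?
mass = 0.15   # phone mass, kg (assumed)
c_p = 800.0   # effective specific heat, J/(kg K) -- rough blend of glass/metal/battery
power = 1.0   # dissipated power, W (assumed; screen on, light use)

rate_k_per_s = power / (mass * c_p)         # temperature rise rate, K/s
rate_f_per_min = rate_k_per_s * 60 * 9 / 5  # convert to degrees F per minute

print(f"~{rate_f_per_min:.1f} F per minute")  # ~0.9 F per minute
```

Heavier use (gaming, navigation) would push the power higher, but the order of magnitude stays the same.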

Presumably the product’s claimed heat protection works to some degree, or Salt Cases would have many unhappy customers. But as best I can guess, they only provide passive heat protection over short periods of time. In any case, regardless of the effectiveness of their product, the explanation doesn’t really make sense – hopefully some day they’ll update their “technology” page to give an explanation more consistent with my understanding of heat transfer.

Levelized cost of wear

I consider this post a work-in-progress. If there’s any real merit to this concept, I’ll need to come back and clean up the post so that it’s coherent to someone who isn’t already familiar with all the ideas I use to build up the concept.

There’s an idea related to clothes shopping called “cost per wear.” It’s a simple idea for quantifying the cost of different items of clothing that’s (arguably) more useful than their sticker price. To find an item’s cost per wear, you simply divide its cost by the number of times you expect to wear it before getting rid of it:

CPW = \frac{cost}{times\ worn}

You can find plenty of articles about cost per wear (some good, and some which grossly misinterpret it), but in this post I’ll propose an extension to the idea. But before proposing the extension, I’ll recount how I ended up with it. I was first introduced to the idea of cost per wear by my undergraduate research advisor, who claimed that you should never spend less than $2000 on a suit (he always dressed very sharp). The reason he gave was cost per wear: if you buy a cheap suit, you’ll never want to wear it, and you’ll only get a few uses out of it before throwing it away. If you buy a nice suit, you’ll use it as much as you can (for dates, interviews, any vaguely formal event, etc.), and in the long run it will be cheaper – at least in terms of cost per wear. Putting numbers to this, there seems to be some wisdom in his claim. If I buy a cheapo suit for $120, it might last me two years, and I’ll only wear it when I absolutely need to wear a suit (maybe three times a year). This yields:

CPW_{cheapo\ suit} = \frac{\$120}{2\ years \times 3\ wears/year} = \$20/wear

If I buy a primo suit for $2000, it could last me closer to ten years, and I’d want to wear it whenever I had the chance (maybe once a month). Therefore:

CPW_{primo\ suit} = \frac{\$2000}{10\ years \times 12\ wears/year} = \$16.67/wear

Since the nice suit has a lower cost per wear, the argument goes, it’s actually the thriftier purchase. Somehow this idea came up in a discussion with a fellow grad student, and from our perspective it didn’t seem to paint an accurate picture of our situation if we were in the market for a new suit. A point that cost per wear misses is that a dollar in my pocket today is not the same as a dollar in my pocket ten years from now (something that grad students are acutely aware of, as in ten years we imagine our salaries will be at least triple what they are today).

There is another idea, called levelized cost of energy, or LCOE, which is used to calculate the effective cost of electricity from different sources – specifically as a way to compare renewable sources like wind and solar, which are “free” once they are installed, to conventional sources that require burning fuel you have to pay for. The piece that’s relevant here is that even if you had a maintenance-free solar panel that lasted forever, if you have to pay for the panel initially, its LCOE would not be zero (i.e., the electricity it generates is not free). This is because even though you get infinite electricity from that panel, it’s spread out over infinite time. And when you consider inflation, the electricity that the panel generates in one or two hundred years has almost no value to you today.
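The infinite-lifetime panel makes a nice toy calculation, because the discounted sum converges. Here's a sketch with entirely made-up numbers for the panel cost, output, and discount rate:

```python
# LCOE of a hypothetical maintenance-free solar panel that lasts forever.
# The geometric series sum_{t=1..inf} 1/(1+r)^t converges to 1/r, so the
# discounted lifetime energy is finite. All numbers are made up.

cost = 10000.0   # upfront panel cost, $
energy = 5000.0  # annual energy output, kWh/year
r = 0.05         # discount rate

discounted_energy = energy / r   # finite, despite the infinite lifetime
lcoe = cost / discounted_energy  # $/kWh

print(f"LCOE = ${lcoe:.2f}/kWh")  # nonzero even though the panel never dies
```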

I’m sure this explanation is insufficient for anyone not already familiar with LCOE, but in any case, that’s the idea I borrowed from to come up with “levelized cost of wear” or LCOW, an extension to cost per wear. LCOW is calculated as follows:

LCOW=\frac{cost}{\sum\limits_{t=1}^n \frac{wears/year}{(1+r)^t}}

where n is the number of years you expect the item to last and r is your “personal inflation rate” – nominally we can say that it’s how much you expect your salary to increase annually. LCOW takes into account how the value of money might change over time for an individual, whereas CPW does not. We can revisit the cheap vs. expensive suit using LCOW, using an r of 12% (this would be very high for most individuals, but for the PhD student example, it corresponds to about triple the salary in 10 years):

LCOW_{cheapo\ suit}=\frac{\$120}{\sum\limits_{t=1}^{2} \frac{3}{(1.12)^t}}=\$23.67/wear

LCOW_{primo\ suit}=\frac{\$2000}{\sum\limits_{t=1}^{10} \frac{12}{(1.12)^t}}=\$29.50/wear
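A short script, using the suit numbers from the text, reproduces both values:

```python
# Levelized cost of wear, using the suit numbers from the text.

def lcow(cost, wears_per_year, years, r):
    """Cost divided by the discounted total number of wears."""
    discounted_wears = sum(wears_per_year / (1 + r) ** t
                           for t in range(1, years + 1))
    return cost / discounted_wears

r = 0.12  # "personal inflation rate"
print(f"cheapo suit: ${lcow(120, 3, 2, r):.2f}/wear")
print(f"primo suit:  ${lcow(2000, 12, 10, r):.2f}/wear")
```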

When considering LCOW, the cheap suit is the thriftier purchase. Intuitively this makes sense – my friend and I will have plenty of time to go shopping for nice suits once we have real jobs. In the meantime, it’s a better use of our money to buy the cheapest suit that can get us through graduation.

Rocket League: the most authentic soccer video game ever made

A few weeks ago, at the recommendation of a friend, I started playing Rocket League. In Rocket League you control a rocket-powered car in a game resembling indoor soccer: you score points by knocking a ball into the opposing team’s goal, and you try to prevent the other team from knocking the ball into yours. While I’m late to the party (Rocket League has been out for a bit over a year now), in these past few weeks it’s completely won me over: right now I’d say I’m spending the majority of my free time playing Rocket League.

I’ve also been playing some actual soccer this summer, and after playing soccer one day I realized why Rocket League is so compelling to me. Rocket League is the most authentic soccer video game I’ve ever played. A natural response might be: “That’s ridiculous! How could a game about cars playing soccer be more authentic than games like FIFA, which are about the actual sport of soccer?” I would not contest that watching a game played out in FIFA is more like watching an actual soccer game, but playing Rocket League captures the feeling of actually playing soccer better than any other video game I’ve played. I am not the first person to share this opinion, but nonetheless I thought I’d take this entry to explain my reasoning in a bit more depth.

The first reason is that the mechanics of Rocket League make the experience much closer to playing soccer. In an actual soccer game, when you want to shoot, pass, or clear the ball, all of these are accomplished by kicking the ball with your foot (well, usually contact is with the foot, but occasionally it could be with the leg, head, etc.). In FIFA-like video games, you perform the different kicks by pressing different buttons, e.g., [x] to pass or [y] to shoot, even though an actual soccer player kicks the ball in both cases, just in a different way. In Rocket League, the “kicks” are collisions, and all the contact between the car and the ball is physically simulated in the game. So whether you want to pass or shoot, in both cases you need to drive your car into the ball. Whether it’s a pass or a shot depends on how fast you strike the ball and at what angle; there’s no pass button you can press that will cause your car to knock the ball towards a teammate.

The result of these mechanics is that when you’re bearing down on a ball in front of the opponent’s goal in Rocket League, it feels like running up to shoot the ball in actual soccer. In both cases you’re acutely focused on how to strike the ball to maximize the chance of it going into the goal. In FIFA, on the other hand, you just press the “shoot” button and hope for the best (I’m probably underselling the control in FIFA a bit here, but you get the idea). Likewise, in Rocket League when you’re trying to clear a ball in front of your goal, you’re trying to get some part of the car between the goal and ball (similar to real soccer, but replace car with leg), not just mashing the “clear” button in the proximity of the ball.

The other big reason that Rocket League recreates an authentic soccer experience is the perspective. While not technically played in a 1st person view, Rocket League is 1st person in the sense that you only control one car, and your field of vision originates from around that car. This is different from FIFA-like games, which let you swap control of players so you’re typically in control of the player with the ball (on offense) or the player challenging the ball (on defense), and are viewed from a perspective similar to how professional soccer games are broadcast, so you can see all the relevant action around the ball.

In a real soccer game, what you do off the ball is very important: your position is critical to creating scoring opportunities for your team and shutting down chances for your opponent. This aspect of team play is largely out of your control in FIFA-like games: what the other players do on your team is automated. In Rocket League, since you only control your own car, you need to decide where to position yourself for the entire game, not just when you’re on the ball. And similar to real soccer, this positioning is very important: if you play like a 5-year-old plays soccer and just chase the ball the whole match, your team won’t have the structure to both attack and defend, and you’ll doom yourself to many losses.

The view that Rocket League is played in (pseudo 1st person) also makes the experience more authentic. In real soccer, situational awareness is an important skill to have, which is recreated in Rocket League by forcing you to pan the camera up and down the field if you want to know where your teammates and opponents are. In FIFA-like games, everyone’s positions are simply laid out for you by the high viewing angle of the field. The Rocket League view also means you need to actually be able to judge distances and trajectories to know where you should head to intercept a ball or player (as in real soccer), whereas FIFA-like games put a glyph on the ground where the ball is going to land, since it would be intractable to assess a ball trajectory from the high viewing angle they use.

To me, this combination of mechanics and perspective ends up making Rocket League so much more fun because it feels like actually playing soccer. In Rocket League it feels like you scored the goal, whereas in FIFA it feels like the character in the game scored, you just happened to tell him to shoot.

Windmills in Red Mars

Note: this post is based on a recitation problem I wrote when I was TAing graduate heat transfer last semester. At some point I hope to upload some of the problems I wrote, including this one, which should provide a more structured, mathematical approach to answering the question (rather than just the conceptual approach provided here).

In high school, I read the science fiction novel Red Mars by Kim Stanley Robinson, which chronicles the efforts of a colony of astronauts to terraform Mars (that is, make it more Earth-like, and therefore habitable to humans). Red Mars is fairly far towards the “hard” end of science fiction, meaning that the story is fueled, as much as possible, by actual science that we currently have a strong understanding of. To give recent Hollywood examples, The Martian is representative of hard science fiction, while Interstellar is representative of soft science fiction. This distinction is important because when something doesn’t click as being realistic (in terms of the science) in soft sci-fi, you’re expected to continue suspending disbelief, while if something doesn’t click as being realistic in hard sci-fi, it means the author messed up.

I read Red Mars many years ago, and in the intervening time I’ve forgotten most of the book. However, there is one plot point I still remember, because I disagreed with my high school science teacher about its scientific accuracy. At one point in the story, the colonists distribute windmills all over the surface of Mars which power heaters, with the intention of raising the average temperature of Mars. This post is concerned with the question of whether or not such a scheme could actually raise the surface temperature of Mars. Later in the story we find (spoiler alert) that the proponent of the windmill distribution plan didn’t care about their ability to raise the planet’s temperature; he wanted to use them to distribute plant life (algae? I don’t remember exactly) around Mars. That’s all fine and good, except that 1. some scientists in the story later find that the temperature of Mars has risen by some small (but apparently measurable) amount that they attribute to the windmills, and 2. the plan should be thermodynamically sound if we’re to believe that the other colonists could be convinced to go along with it, regardless of the plan’s true goals.

My science teacher argued that the plan is unsound, because wind is kinetic energy that will eventually be viscously dissipated and converted to heat anyway, so the windmills are pointless. My argument was that a Mars with windmills will be less windy on average (and therefore have less kinetic energy in its atmosphere), and because of conservation of energy it’s reasonable that the drop in kinetic energy would be accounted for by a rise in thermal energy (i.e., Mars would be at a higher temperature). For a long time I couldn’t reconcile these two viewpoints; however, with some simplifying assumptions, we can capture the physics of the situation in a way that is compatible with both arguments.

Our Mars model has two forms of energy: kinetic energy (how windy it is on Mars) and thermal energy (how hot Mars is). The other important piece to know is that the only way Mars can exchange energy with the universe is by radiation: it absorbs energy radiated from the sun, and radiates out infrared radiation to the universe. The energy it receives from the sun should be constant, but the energy it radiates away is a function of its temperature (the hotter Mars gets, the more it radiates away). That means that Mars has some equilibrium temperature, at which the rate it loses energy to the universe is equal to the rate it absorbs energy from the sun. This system is self-balancing (or has negative feedback, to use control theory terminology): if Mars’ temperature rises above the equilibrium, it radiates away more than it absorbs, so the temperature lowers back to the equilibrium value. And vice-versa if Mars’ temperature drops below the equilibrium value. What is not defined by our simple model is how fast this happens; we just know that given enough time, Mars will reach its equilibrium temperature.

From this perspective, my science teacher’s argument is correct: even if you convert kinetic energy to thermal energy, in the long run the Martian temperature remains unchanged. However, this is only the long-term behavior. When the windmills are deployed on Mars, they act to convert some of the kinetic energy in the wind to thermal energy, and they will continue this conversion until the wind on Mars reaches a new, lower, “post-windmill” value. Immediately after the windmills have converted the kinetic energy to thermal energy, they will have temporarily increased the average temperature of Mars.
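This qualitative story can be captured in a toy simulation. Every parameter below is made up purely for illustration (the equilibrium temperature, heat capacity, and 1 K "windmill bump" are not meant to be realistic for Mars):

```python
# Toy lumped model of Mars: constant absorbed solar flux, T^4 emission,
# plus a one-time dump of wind kinetic energy as heat. All parameters
# are made up; only the qualitative behavior matters.

SIGMA = 5.67e-8              # Stefan-Boltzmann constant, W/(m^2 K^4)
T_EQ = 210.0                 # assumed equilibrium temperature, K
absorbed = SIGMA * T_EQ**4   # absorbed flux chosen so T_EQ is the equilibrium
c = 1e7                      # assumed areal heat capacity, J/(m^2 K)
dt = 3600.0                  # time step, s

temp = T_EQ + 1.0            # windmills convert wind KE to heat: +1 K bump
history = []
for _ in range(24 * 365):    # one Earth year, hourly steps
    net_flux = absorbed - SIGMA * temp**4   # absorb constant, emit ~ T^4
    temp += net_flux * dt / c
    history.append(temp)

print(f"just after the windmills run: {history[0]:.2f} K")
print(f"one year later: {history[-1]:.2f} K")   # relaxed back toward 210 K
```

The bump decays with a radiative time constant of roughly c/(4σT³) – a couple of months for these numbers – so the temperature rise is real but temporary, consistent with both arguments.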

So the full picture of this simple model for Mars is that with the windmills, the average temperature of Mars could be changed temporarily by converting kinetic energy to thermal energy, but in the long run the equilibrium temperature of Mars is determined by radiative exchange with the universe, which the windmills can’t (directly) address. In terms of how this bears on Red Mars’ realism, I would say that on the first point, there could be a tiny (temporary) increase in temperature due to the windmills, but on the second point, at least one of the colonists should’ve had a strong enough understanding of thermodynamics to explain that the plan would ultimately be futile and that resources should be diverted to a different endeavor. So unfortunately, for this small corner of the story, it seems that Kim Stanley Robinson messed up.

An easy improvement to the MIT grad housing lotto

Disclaimer: MIT grad programs can be very insular (perhaps this is true of graduate programs in general), which causes an echo chamber effect leading grad students to be more acutely aware of problems that affect them than problems that affect people in general. The problem I’m discussing in this post is very much an MIT grad student “first world problem,” but even so, I’d argue it’s still worth trying to fix, and the following discussion might be applicable to other problems as well.

MIT has an atypically large graduate housing program: about a third of grad students live in MIT dorms, and the proportion among first year grad students is much higher. I lived in the dorms my first two years at MIT, while I only stayed in the dorms at UC Berkeley (my alma mater) for one year. That I lived in graduate dorms longer than I lived in undergrad dorms might surprise some, but it’s simply the result of how good a deal the MIT grad dorms are. The dorms provide an extremely short commute, facilities personnel that are more responsive than any landlord I’ve dealt with, faster internet than I’ve had anywhere else, and you pay about market rate. Grad dorms are also set up more like apartments: your bathroom is only shared among a few people (not the entire hall) and you have a kitchen, so cooking for yourself is actually feasible. While all these features are quite nice, I feel the MIT graduate dorms have one major weak point: the lottery that is used to determine MIT grad housing assignments.

The process of how grad housing is assigned is described here, and while the process isn’t 100% transparent, enough information is provided that it’s possible to infer how the algorithm they use for room assignments works.

Here’s the experience from the perspective of the grad student looking for housing:

  1. You rank all the dorms (and room types) that you’d be willing to live in. You can also submit requested roommates along with your ranking. This ranking needs to be submitted by early May.
  2. A few weeks later you receive at most one room assignment. If you receive a room assignment and don’t take it, you pay a $250 penalty (not trivial on a grad student stipend). If you don’t receive a room assignment, you can add yourself to the wait list (at which point your fate is truly up in the air).
  3. If your requested roommates received the same room assignment that you did, you will (most likely) be assigned to the same room. If not, you are assigned random roommates.

There are two reasons that this system is frustrating to me. A minor frustration is that the delay between steps 1 and 2 is unnecessarily long given how simple the assignment algorithm is (although admittedly the delay is likely due to bureaucracy rather than computation). My bigger complaint is with how roommate assignments are handled. When you’re looking for housing, one of the first (if not the first) things you determine is who you want to live with. If you have friends you want to live with and try to live in the MIT dorms, you risk getting split up or having to pay a penalty, since you can’t indicate to the housing assignment process that you only want to live in the dorms if it’s with a specific roommate. This happened to me my second year at MIT: I got a room assignment while the friend I wanted to live with didn’t, and I took the room to avoid the penalty. (That experience might be the reason I’m writing this post, even though it happened years ago.)

The housing site implies that both of these issues are due to the “complexity of the algorithm.” They don’t explicitly tell us how the algorithm works, but they do provide three points about what the algorithm prioritizes and how to maximize your chances of getting a room assignment:

  • The first priority is to give room assignments to as many students as possible
  • The second priority is giving students higher room preferences
  • The more dorms/room types you rank, the higher your chances of getting an assignment

Here’s an algorithm (which I would characterize as simple, not complex) to place N students into housing assignments that is consistent with the three points above:

For each student, in order from 1 to N (student number determined randomly):

Perform a “placement search.” Go through the current student’s ranked rooms in order until one with a vacancy is found, and give the current student that room assignment.

If no vacancies are found, perform a “push search.” Go through the current student’s ranked rooms, and for each room check the students already assigned there. If one of those students has a vacancy among their own lower-ranked options, “push” that student to the lower choice and give the current student the newly opened room assignment.

If no vacancies are found and no students in the ranked rooms can be pushed to lower ranked options, the current student does not receive a room assignment.

Here is the algorithm expressed as pseudocode:

    For i = 1 to N:                         # students in random lottery order
        # Placement search
        placed = false
        For each room in student(i).rank:   # highest-ranked first
            If checkVacancy(room):
                assign(student(i), room)
                placed = true
                break
        If placed: continue                 # move on to student i+1

        # Push search
        For each room in student(i).rank:
            For each pushee in findAssigned(room):
                For each lower in pushee's options ranked below room:
                    If checkVacancy(lower):
                        reassign(pushee, lower)   # push them to their lower choice
                        assign(student(i), room)
                        placed = true
                        break out of the push search

        # If placed is still false, student(i) receives no room assignment

We can characterize the complexity of this algorithm using big O notation: it is O(n²), because it tries to place N students and for each of those students it might need to check on the order of N students during the push search (n · n = n²). O(n²) is low complexity: the stable marriage algorithm used to match recently graduated MDs to residency programs is a famous example of an O(n²) algorithm that completes in a few minutes or less for tens of thousands of participants. Thus even the highly unoptimized version of the algorithm for MIT housing that I outlined above should complete in less than a minute. From a less mathematical standpoint, if the pseudocode for an algorithm can be written in a few dozen lines, it’s not that complex (in the colloquial sense of the word). A note: since roommate requests are submitted from the beginning, it is possible that they are factored into the algorithm that MIT uses as well, but given the language on the roommates section of the grad housing info website, this seems unlikely.
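To sanity-check the timing claim, here’s a toy Python implementation of the placement/push algorithm described above, run at a made-up scale (the room counts, capacities, and ranking lengths are all invented for illustration, not MIT’s actual numbers):

```python
import random
import time

def lottery(students, capacity):
    """Toy version of the placement/push algorithm.

    students: list of rankings (each a list of room ids, best first),
    in random lottery order. capacity: room id -> number of open slots.
    Returns a dict mapping student index -> assigned room id.
    """
    assigned = {}
    occupants = {room: [] for room in capacity}

    for i, ranking in enumerate(students):
        # Placement search: first ranked room with a vacancy.
        placed = False
        for room in ranking:
            if len(occupants[room]) < capacity[room]:
                occupants[room].append(i)
                assigned[i] = room
                placed = True
                break
        if placed:
            continue
        # Push search: move an already-assigned student to one of
        # their lower-ranked rooms to open a spot here.
        for room in ranking:
            for j in list(occupants[room]):
                lower_options = students[j][students[j].index(room) + 1:]
                target = next((r for r in lower_options
                               if len(occupants[r]) < capacity[r]), None)
                if target is not None:
                    occupants[room].remove(j)
                    occupants[target].append(j)
                    assigned[j] = target       # pushee takes their lower choice
                    occupants[room].append(i)
                    assigned[i] = room
                    placed = True
                    break
            if placed:
                break
    return assigned

# Invented scale: 200 room types with 40 slots each (8,000 slots),
# 10,000 applicants ranking 10 rooms apiece.
rooms = list(range(200))
capacity = {r: 40 for r in rooms}
students = [random.sample(rooms, 10) for _ in range(10_000)]
start = time.perf_counter()
result = lottery(students, capacity)
print(f"placed {len(result)} of 10,000 students "
      f"in {time.perf_counter() - start:.2f} s")
```

On any modern machine this finishes in well under a minute, consistent with the O(n²) back-of-envelope estimate.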

I propose a simple change to the algorithm I’ve outlined above that would completely address the roommate portion of my complaint about the MIT grad housing lotto. If 2-4 students know they want to live together, they can apply for grad housing as a group. They would only be able to rank rooms that exactly accommodate the size of the group (e.g., a pair of students could apply for double rooms, but not triples or quads), and from that point the algorithm would treat them as a single applicant that takes up more than one spot. In the random number assignment, you could either give the group a single number, or give each individual student a number and assign the last (i.e., worst) of those numbers to the group overall (corresponding to the idea that the group is only assigned a room if each individual student would’ve been assigned a room had they applied separately). The complexity of the algorithm would remain the same after this change, and if an algorithm close to the one I proposed is already in use, minimal changes would be needed to accommodate this group application option.
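To show how small a change this is, here’s a minimal Python sketch of the group idea (placement search only; the push search and the group lottery-number policy are omitted for brevity, and the room names and applicant names are invented):

```python
from dataclasses import dataclass

@dataclass
class Applicant:
    """A lottery entry: a single student (size 1) or a group of 2-4
    friends who will only accept rooms that fit them exactly."""
    names: list
    ranking: list            # room ids, best first
    size: int = 1

def run_lottery(applicants, vacancies):
    """Greedy placement in lottery order; vacancies: room id -> open slots.
    A group is treated as one applicant consuming `size` slots at once,
    so its members can never be split up."""
    assignments = {}
    for app in applicants:
        for room in app.ranking:
            if vacancies[room] >= app.size:
                vacancies[room] -= app.size
                assignments[tuple(app.names)] = room
                break        # applicants with no fitting vacancy get nothing
    return assignments

# A pair applies together for doubles only; a single applies as usual.
vacancies = {"double_A": 2, "double_B": 2, "single_C": 1}
applicants = [
    Applicant(["alice", "bob"], ["double_A", "double_B"], size=2),
    Applicant(["carol"], ["single_C", "double_A"]),
]
print(run_lottery(applicants, vacancies))
# → {('alice', 'bob'): 'double_A', ('carol',): 'single_C'}
```

The only structural difference from the single-student version is the `size` field and the `vacancies[room] >= app.size` check, which is why I’d expect the real implementation to need similarly minimal changes.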

I think this change would be a big improvement to the MIT grad housing lotto, and I think it could also improve the experience of living in the dorms beyond just the application process. Earlier I listed many reasons why the grad dorms are superior to typical undergrad dorms, but there is one striking dimension where undergrad dorms beat grad dorms: in the MIT grad dorms there is very little sense of community. While I attended some events hosted by the dorms while I lived there, I couldn’t tell you the names of any of my neighbors, and I didn’t form any lasting friendships with fellow dorm mates (besides those I met through other channels who happened to also live in the dorms). This is in stark contrast to my undergrad experience: a majority of the friends from Cal that I still keep in touch with, I met through living in the same dorm. I think my experience in the MIT grad dorms is typical, with the exception of students who get involved in dorm leadership.

There are many reasons this may be the case. Departments and research groups are more conducive to forming friendships as a grad student than as an undergrad, and as a grad student you spend a lot less time at home. However, I think the MIT grad dorms lack a feeling of community in part because many of the apartments are full of people who aren’t friends. If you aren’t friends with your roommates, you don’t want to spend as much time at home, and you don’t think of home as a place to hang out with friends (I experienced this my second year in the dorms). If the dorms were more accommodating of students trying to live with people they know they’ll get along with, I think it could provide a better environment for building a sense of community. With this change, MIT grad dorms might have more living rooms where people actually spend time together, and consequently might have more open doors and therefore more opportunities to meet neighbors.

While I think this change would be an improvement, I recognize that there are a number of reasons why this feature might be intentionally missing. It would create more demand for grad student housing, which I assume is already at capacity. Perhaps more importantly, because housing assignment is zero-sum, if grad housing starts serving students who want to room with their friends better, it will serve the rest of the student population worse. It’s reasonable to me that the administration would be more concerned about grad housing serving students who don’t have friends picked out to live with than those who do. That being said, if one of these reasons is the real one, it would be nice for MIT housing to be honest about why group applications aren’t offered rather than hiding behind the (rather condescending) explanation that “the algorithm is complex.”

MIT Massive Annotated Bibliography

I chose to complete my annotated bibliography on Competency Based Education (CBE), which is a trendy topic in education today. At the beginning of the semester, the extent of my knowledge of CBE was essentially that CBE approached teaching in a different manner than traditional “seat time” education. Traditionally, each student spends the same amount of time studying a topic, and at the end of that time they are assessed to see how well they learned it before moving on (they only receive more time to study a topic if they do egregiously poorly and are forced to retake the whole course). CBE aims to instead make level of mastery fixed, and students are given as much time as they need on a topic to reach that level of mastery. In this way, every student should be competent in all content areas before they graduate. This idea, especially in conjunction with ideas I’ve learned about differences in individuals’ speed of learning (e.g., see The End of Average by Todd Rose), seemed very compelling to me, and a promising direction for schools to move towards in the future. I was also interested in looking at CBE for my annotated bibliography because I am working on a small project with Prof. Sanjay Sarma to investigate how CBE is (or isn’t) used in areas where attaining competency is a matter of life or death (e.g., flying an airplane or performing a surgery). For this reason, the references I found are not all on traditional K-16 education, and many extend into other educational domains.

Chyung, Seung Youn, Donald Stepich, and David Cox. “Building a competency-based curriculum architecture to educate 21st-century business practitioners.” Journal of Education for Business 81.6 (2006): 307-314.

This article discusses applying CBE to curriculum design, in this particular case for business education. The paper provides a nice background on definitions of CBE and terms within CBE. They argue for defining competencies within each field: “the generic dictionary scales are applicable to all jobs and none precisely.” The paper provides a framework/flow-chart for how one particular program developed a CBE-based curriculum for “Information and Performance Technology.”


Ennis, Michelle R. Competency models: a review of the literature and the role of the employment and training administration (ETA). Office of Policy Development and Research, Employment and Training Administration, US Department of Labor, 2008.

This review covers the use of competency models in assessing employees (or potential employees) and in determining what is required of an employee. Here competency is used to mean being able to apply knowledge, skills, abilities, behaviors and characteristics to performing a task. Furthermore, competencies are “specific personal qualities that are ‘causally related to effective and/or superior performance,’ are common across many settings and situations, and endure for some time.” A competency model is a tool for identifying the competencies needed for a specific role (i.e., a behavioral job description). Competency models are hierarchical: a foundation common to many roles sits at the base, and pieces more specific to a certain job are built on top of the foundational elements. Competency models provide a language and framework for determining the qualification of workers to move into different positions or to improve their performance in their current position. Tying assessments to competency based measures is important to successfully using competency models. WGU pops up in the review as a higher education institution using competency models to try to train better teachers.


Frank, Jason R., et al. “Toward a definition of competency-based education in medicine: a systematic review of published definitions.” Medical teacher 32.8 (2010): 631-637.

This paper seeks to establish a clear, widely accepted definition of CBE within the medical field, which they approached by reviewing the medical literature on CBE (as well as searching Google). Four major themes within definitions of CBE are identified: organizing framework, rationale, contrast with time, and implementing CBE. Their proposed definition of CBE is as follows: “Competency-based education (CBE) is an approach to preparing physicians for practice that is fundamentally oriented to graduate outcome abilities and organized around competencies derived from an analysis of societal and patient needs. It deemphasizes time-based training and promises greater accountability, flexibility, and learner-centredness.”


Hamilton, Neil W., and Sarah Schaefer. “What Legal Education Can Learn from Medical Education About Competency-Based Learning Outcomes Including Those Related to Professional Formation (Professionalism).” (2015).

This report discusses CBE’s adoption in the medical field in the context of how its lessons can help CBE in law (which adopted CBE later). One of the main takeaways is that core competencies of being a doctor were identified (professionalism, patient care and procedural skills, medical knowledge, etc.) and that these competencies are assessable. Another main takeaway was the idea of the “hidden curriculum”: that much of what medical students learn is not in lectures or didactics, but through interaction with attending physicians in rotations at hospitals and clinics. If the hidden curriculum doesn’t support the stated curriculum, it undermines the authority of the stated curriculum as representing the field’s true best interests.


Hodge, Steven. “The origins of competency-based training.” Australian journal of adult learning 47.2 (2007): 179.

This paper discusses the “societal” and “theoretical” origins of CBT. CBT arose in the US in the 50s, 60s, and 70s due to societal trends towards accountability and personalization. Apparently Sputnik was the impetus, and fear of Soviet technological superiority spurred the federal government to play a larger role in education and training. The key theoretical influences on CBT were behavioral psychology (due to competencies being observable behaviors) and systems theory (training as system). The paper also discusses specific theoretical contributions to CBT.


Jamieson, Lynn M. “Competency-Based Approaches to Sport Management.” Journal of Sport Management 1.1 (1987).

This paper discusses the required competencies of sport managers/professionals (not to be confused with athletes). Competencies for such a position are in areas such as business procedures, communications, facility/maintenance, governance, legality, management techniques, etc. This paper’s analysis of CBE in sports management uses a lot of Likert scoring and statistics, but is not particularly insightful with respect to CBE; I had thought CBE applied to such a distinctive field would be more interesting.


Jones, Elizabeth A., and Richard A. Voorhees. “Defining and Assessing Learning: Exploring Competency-Based Initiatives. Report of the National Postsecondary Education Cooperative Working Group on Competency-Based Initiatives in Postsecondary Education. Brochure [and] Report.” (2002).

This is a long report (almost 200 pages) that outlines competency based education in the context of higher education. It is intended to be used as a resource for professors or administrators who want to start competency based initiatives at their institution. The report identifies four main categories of “strong practices” in CBE institutions: planning, selecting assessments, ensuring learning experiences are relevant to competencies, and reviewing assessment results for iteration. There are some good specific examples of what the authors identify as competencies (e.g., see exhibit 2, appendices). The report includes a comprehensive annotated bibliography with many further references that look valuable for deeper study. The majority of the report is used to describe 8 case studies. I should return to this to read them in further detail, but one observation I had was that (at least when this report was written) most of the assessments described were indistinguishable from traditional assessments (tests, essays graded by rubric, etc.).


Lorenzo, George. “Western Governors University: How competency-based distance education has come of age.” Educational Pathways 6.7 (2007): 1-4.

This article reports on the success of Western Governors University (WGU), an institution which is entirely distance learning and competency based. Here competency based is stated as meaning that students earn their degrees by passing assessments rather than completing credit hours. WGU seems to target adult learners (the reported average student age is 38) and students who wouldn’t be well served by a traditional residential university education. The types of assessments used include objective tests (I’m not sure what this means), performance tests, portfolios and projects. As an example, for teaching licensure students must perform 12 weeks of live teaching under observation as one of their assessments. There is also an attempt to use industry standard certifications when they are available (this is often the case in IT and health professions). This article paints a very promising picture of WGU (although I also get the vibe that it was written by WGU’s publicity department) and its use of CBE. One question I am left with is how effectively CBE can be implemented outside the program areas offered by WGU (business, IT, health and teaching). See also the WGU program guidebook, annotated below.


Malan, S. P. T. “The ‘new paradigm’ of outcomes-based education in perspective.” Journal of Family Ecology and Consumer Sciences/Tydskrif vir Gesinsekologie en Verbruikerswetenskappe 28.1 (2000).

This paper reviews the roots of OBE and attempts to put recent efforts in perspective. OBE is an approach to education where the focus is on the successful demonstrations of learning being sought: the “what” and “whether” of learning matter more than the “when” and “how.” OBE dates back to the middle ages (craft guilds). CBE was big in the US in the 60s, and was based on six components: explicit learning outcomes, a flexible time frame, varied teaching activities, criterion-referenced assessment, certification based on demonstrations, and adaptable programs. The author argues that calling OBE a paradigm shift is overselling it, as there is an insufficient research base to verify the claims of OBE, and OBE is not fundamentally different paradigmatically from traditional educational approaches. OBE is discussed as a transformational (rather than transmissive) approach. OBE uses performance-based and authentic assessment strategies within the context of criterion-referenced assessment, which must integrate knowledge, skills and values.


McClarty, Katie Larsen, and Matthew N. Gaertner. “Measuring Mastery: Best Practices for Assessment in Competency-Based Education. AEI Series on Competency-Based Higher Education.” American Enterprise Institute for Public Policy Research (2015).

This report examines assessment in CBE, and makes a number of recommendations. The authors argue that it is critical to validate the assessment instrument used, that meaningful competency thresholds must be set which are based on multiple sources of evidence, and that assessment design should be driven by external validity (i.e., will employers care how students performed on these assessments?). These points are important because CBE cannot be successful without effective competency assessments. The article covers many specifics of the steps that should be taken in designing and validating an assessment.


Miller, Gregory E. “The assessment of clinical skills/competence/performance.” Academic medicine 65.9 (1990): S63-7.

This is a (quite well written) review of assessments used in medical education. The definitions used don’t map perfectly to many other CBE references in this bibliography, and the review distinguishes between knowledge, competency (meaning “applicable knowledge”), performance (meaning “demonstration in an artificial environment”), and action (what I would normally think of as performance, i.e., “doing the task in a real environment”). The review describes the different assessments developed and in use (at the time of writing) to infer all four of these characteristics of physicians in training. It is impressive how ahead of the curve the assessments described seem, given that this review is from 1990. The review also includes a number of insights gleaned from the field and the author’s experience: summative testing must be quite long to be accurate, and ranking students is less important than determining whether they’ve achieved a cut-off level of mastery. It is interesting to see that the author doesn’t know where professionalism should fit into medical education, an issue covered in depth in some of the other references in this bibliography (which had the benefit of being written almost two decades later).


Mueller, Paul S. “Incorporating professionalism into medical education: the Mayo Clinic experience.” The Keio journal of medicine 58.3 (2009): 133-143.

This paper discusses professionalism as a core physician competency. The paper claims that professionalism can be taught, learned and assessed (as opposed to being an intrinsic personal characteristic) and it is critical to do so as part of the education of certified medical professionals. As they argue that assessment of professionalism is critical, the authors also discuss methods for formative and summative assessment.


Scalese, Ross J., Vivian T. Obeso, and S. Barry Issenberg. “Simulation technology for skills training and competency assessment in medical education.” Journal of general internal medicine 23.1 (2008): 46-49.

This paper discusses the use of simulations in medical education and assessment, which has become more common partially due to a larger focus recently on competencies. They look at three types of simulations in particular: part task trainers, computer-enhanced mannequins, and virtual reality simulators. These technologies can help faculty save time and give students educational opportunities without exposing patients to novice practitioners. They also allow a proactive rather than ad hoc educational process (do whatever procedure needs doing at the time). In addition to benefits associated with teaching, simulations offer strong opportunities for assessment of competency.


Voorhees, Richard A. “Competency‐Based learning models: A necessary future.” New directions for institutional research 2001.110 (2001): 5-13.

This paper (chapter?) discusses the basic ideas of CBL as well as their potential to be implemented in higher education. It includes the popular pyramid diagram describing the relationships between competencies and other pieces in the conceptual learning model of CBE. The chapter discusses bundling and unbundling of competencies for different contexts. Competencies should be transparent, unambiguous and measurable. I should probably see if I can get a copy of the book from the library to do a more in-depth reading.


Western Governors University. “Program Guidebook: Post-baccalaureate Teacher Preparation, Social Science (5-12).” (2015): 1-14.

This is a program guidebook for one of the teaching programs at WGU (see the article by Lorenzo above for details on WGU), which includes useful details and specifics about how programs are run at WGU. In addition to an explanation of what CBE means (in terms of relevance to the student), the guidebook also includes information on their mentoring approach, and how to connect with other students (since all coursework is completed online). The guidebook also covers the required assessments in detail, including supervised teaching demonstrations and a professional portfolio.

Through the process of writing my annotated bibliography, I learned a lot more about the definitions and frameworks associated with CBE, going beyond the basic idea that CBE is an alternative to seat time requirements. While not all definitions in the literature are 100% consistent, I think most of the authors I read would agree that a competency is the capability of applying knowledge, skills and abilities to the successful performance of a task. I also learned about competency models, which provide a framework for how different individual competencies can be bundled together, leading to successfully fulfilling a particular role or job (similar to how a single competency might bundle knowledge and skills in order to successfully perform a single task within that job). If one takes this approach to understanding what makes someone competent (or incompetent) at their job, it makes sense to design courses and educational opportunities so that they lead to their students acquiring valuable competencies.

Many of the references covered how to develop curriculum which uses CBE or how to implement CBE principles at an existing educational institution. Competencies associated with the course should be identified, and aligned with the learning objectives and course activities. Additionally, an assessment (or set of assessments) needs to be selected that accurately gauges the competency level of the student. Finally, a level of achievement associated with successful acquisition of the competencies should be identified. Many references also covered more specific examples within a particular area of study, because while this general list of steps is simple, identifying competencies and developing assessments are very field specific and can be very involved.

I learned about some specific implementations of CBE in higher education, and while I found resources describing a number of different case studies, the case which is represented in my bibliography is Western Governors University (WGU). WGU seems to be successfully implementing CBE programs in business, IT, health and teaching, and mainly targets older students. CBE seems to match well with these fields (which have specific tasks that employees should be able to perform, somewhat in contrast to getting a degree in philosophy, for example), and there are many authentic performance assessments available that could demonstrate with confidence that a student has the necessary competencies without some seat time requirement. For older students, this model also seems very advantageous, as it avoids a full 4 years being required for a new degree when two or more of those years might be review of material they already learned through previous schooling or employment. WGU shows that CBE has promise in higher education, although there are definitely open questions for how the success at WGU would translate to more traditional institutions (e.g., residential colleges that primarily enroll young students).

It was very interesting to learn about the adoption of CBE in medical education, which many of the references I annotated were about. Medical education began to integrate aspects of CBE relatively early on, with the Miller review describing a well-established battery of CBE assessments already implemented and researched in 1990. This makes sense, because many of the tasks expected of a physician can be broken relatively cleanly into different competencies and the outcomes of their tasks are unambiguous and critically important (e.g., if a patient dies, that is a poor outcome potentially indicating a lack of competency). Somewhat unexpectedly however, one of the key features of CBE seems absent from medical education: there is still heavy reliance on seat time requirements. Medical school lasts at least four years for all M.D. students, and each residency program has a set number of years that must be completed before a physician can practice independently. It seems that some elements of CBE have been adopted in order to improve educational outcomes and alert institutions to individuals who need more training on a particular procedure, but the pieces which would accelerate the certification of doctors who don’t require this length of training are missing. For the critical competencies associated with medical doctors, it seems the powers that be still trust seat time requirements more than competency based performance assessments.

Besides the more factual information I learned, I came away with two main impressions of CBE. One of the impressions is that it seems very easy to come up with a curriculum in a CBE wrapper (i.e., identify desired competencies, choose activities aligned with those competencies, etc.) that is essentially identical to traditional education, just with an exit assessment rather than seat time requirements. While this might still sound appealing, in these cases (which I think are most commonly the application of CBE to K-12 education) it seems the assessments are often just essays that are graded with rubrics designed with competencies in mind. For most K-12 topics (with possible exceptions in areas that we’re already good at assessing in a very quantitative way, like math), I think CBE is still far from being able to claim the victories that have been achieved at small scale in higher education.

The other impression I got is that good assessments are key to the successful implementation of CBE, and developing good assessments can be very difficult. Related to the previous point about some CBE being traditional education in a different wrapper, if you have high confidence in the assessment being accurate, then the CBE class really starts to look more novel. However, I personally don’t think that essays graded with rubrics are a very effective way of assessing student competencies, although more research into the area showing their success could change my mind. The more exciting prospect to me is the development of assessments that begin to look more authentic and performance based, which I think represent a true departure from traditional educational structures. Of course, much further work would be needed to develop such assessments, and it’s not clear to me who could provide the resources for that development.

Overall I still think the basic ideas of CBE are compelling, and it makes sense to move towards that direction in K-16 as well as other areas of education. Even so, I think there is a lot of work that needs to happen before CBE will be more effective (and have more public trust) than traditional seat time education, especially in terms of assessment. I’m also wary of some attempts to implement CBE, which don’t seem to be that different from the systems they would replace, although they could still be helpful in incrementally developing curriculum, activities and assessments, so maybe there is little harm in those attempts. I’m excited to see what happens with CBE in the future, but I’m not convinced that if it becomes adopted in name, it will truly represent a shift in educational philosophy.