ELO is a player rating system that was first used to rank chess players, and later found widespread use in video games, basketball, baseball and many other sports. It’s designed so that when an underdog wins a game, they are given more “credit” for the win than a favourite would be. For example, if chess grandmaster Magnus Carlsen were to beat me (an unranked player) in chess, it really shouldn’t affect his ranking. Conversely, should someone with a low rating beat the number 1 player in the world, their ranking should shoot up.

ELO is also designed so that a top player can’t just continually play much lower ranked players and keep moving up the rankings at the same rate. That doesn’t mean matchups don’t exist where a certain “style” of play (depending on the sport/game) makes things easier or harder than the ELO suggests, but the system is meant to reward underdogs winning far more than it rewards someone picking favourable matchups.

ELO seems like it should be a massively complicated system, but it really boils down to two “simple” equations.

### Expectation To Win Equation

The “Expectation To Win” equation is really part of the ELO equation, but it’s worth splitting out because you may often want to see it as a standalone number anyway.

```csharp
static double ExpectationToWin(int playerOneRating, int playerTwoRating)
{
    return 1 / (1 + Math.Pow(10, (playerTwoRating - playerOneRating) / 400.0));
}
```

After passing in two player ratings, we are returned the likelihood of Player 1 winning the matchup. This is represented as a decimal (so a result of 0.5 would mean that Player 1 has a 50% chance to win the matchup).

Some example inputs and outputs would look like this:

Player 1 Rating: 1500
Player 2 Rating: 1500
Expectation For Player 1 Win: 0.5 (50% chance to win)

Player 1 Rating: 1700
Player 2 Rating: 1300
Expectation For Player 1 Win: ≈0.91 (roughly a 91% chance to win)
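The numbers above are easy to reproduce. A minimal self-contained sketch, inlining the `ExpectationToWin` method from above:

```csharp
using System;

class ExpectationDemo
{
    // Same formula as in the article: probability that player one wins.
    static double ExpectationToWin(int playerOneRating, int playerTwoRating)
    {
        return 1 / (1 + Math.Pow(10, (playerTwoRating - playerOneRating) / 400.0));
    }

    static void Main()
    {
        Console.WriteLine(ExpectationToWin(1500, 1500)); // 0.5 — evenly matched
        Console.WriteLine(ExpectationToWin(1700, 1300)); // ≈0.909 — heavy favourite
        Console.WriteLine(ExpectationToWin(1300, 1700)); // ≈0.091 — the two expectations always sum to 1
    }
}
```

Note that the two players’ expectations always sum to 1, which is what lets the rating equation later treat the game as zero-sum.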

### ELO Rating Equation

The next step is: given two players who compete against each other with a clear winner and loser, what should their new ELO ratings be? The equation for that looks like this:

```csharp
enum GameOutcome { Win = 1, Loss = 0 }

static void CalculateELO(ref int playerOneRating, ref int playerTwoRating, GameOutcome outcome)
{
    int eloK = 32;
    int delta = (int)(eloK * ((int)outcome - ExpectationToWin(playerOneRating, playerTwoRating)));
    playerOneRating += delta;
    playerTwoRating -= delta;
}
```

Let’s break this down a little.

We pass in both players’ ratings, and the outcome as seen by player 1. Inside, we use a special number called “K”; we’ll talk more about it later, but for now just think of it as a constant. We take the outcome (either 1 or 0) and subtract the expected outcome of the game, then add the resulting delta to player one’s rating and subtract it from player two’s. Because the expected outcome is part of the equation, an underdog win is rewarded far more than a win by the expected winner. Let’s look at some actual examples:

Player 1 ELO: 1700
Player 2 ELO: 1300
Outcome: Player 1 **Wins**
Player 1 New ELO: 1702
Player 2 New ELO: 1298
ELO Shift: 2

Player 1 ELO: 1700
Player 2 ELO: 1300
Outcome: Player 1 **Loses**
Player 1 New ELO: 1671
Player 2 New ELO: 1329
ELO Shift: 29

So as we can see, when player 1 wins they gain only 2 ELO points, because they were expected to come out on top (in fact they were expected to win roughly 90% of the time). However, when they lose, they hand 29 points to the winner, which is a huge shift. This reflects the underdog winning a game they were not expected to win at all.
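The two worked examples above can be reproduced end to end. A self-contained sketch combining both methods from the article:

```csharp
using System;

class EloDemo
{
    enum GameOutcome { Win = 1, Loss = 0 }

    static double ExpectationToWin(int playerOneRating, int playerTwoRating)
    {
        return 1 / (1 + Math.Pow(10, (playerTwoRating - playerOneRating) / 400.0));
    }

    static void CalculateELO(ref int playerOneRating, ref int playerTwoRating, GameOutcome outcome)
    {
        int eloK = 32;
        // (int)outcome is 1 for a win, 0 for a loss, so delta is positive
        // when player one over-performs expectation and negative otherwise.
        int delta = (int)(eloK * ((int)outcome - ExpectationToWin(playerOneRating, playerTwoRating)));
        playerOneRating += delta;
        playerTwoRating -= delta;
    }

    static void Main()
    {
        int p1 = 1700, p2 = 1300;
        CalculateELO(ref p1, ref p2, GameOutcome.Win);
        Console.WriteLine($"{p1} {p2}"); // 1702 1298 — the favourite gains only 2

        p1 = 1700; p2 = 1300;
        CalculateELO(ref p1, ref p2, GameOutcome.Loss);
        Console.WriteLine($"{p1} {p2}"); // 1671 1329 — the upset moves 29 points
    }
}
```

Because the same delta is added to one player and subtracted from the other, the total number of ELO points in the system stays constant.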

### Finding The Perfect “K”

So in our calculate method, we use a constant called “K”. In simple terms, we can think of this number as how “fast” ELO changes hands. A high number (such as 32) means ratings react rapidly to results. A low number (such as 10) means rankings move much more slowly. Typically this means that for sports/games with a low number of games played per season (or per career), you may want a rather high K so that ratings adjust quickly from the few results available. In sports where there are possibly hundreds of games a year, you would want a lower K to reflect that you are expected to lose a few games here and there, but overall you would have to go on a long losing run to really start slipping.

Other times K can change based on either the ELO rating of the players, or the number of games already played. For example, in a new “season” with few games played, a high K would mean that ELO rankings fluctuate wildly, and as the season goes on you could lower K to stabilize the rankings. I’m not a huge fan of this, as it puts more importance on winning games early in the season rather than at the end, but it does make sense for new players in a game to be given a higher K so they can find their true ELO faster.

If you instead vary K based on the ELO rating of the players, you can give a high K to low/mid-range ranked players so that they can dig themselves out of the weeds rather fast, then lower K as you reach higher ELOs to reflect that at the top of the rankings, things should be a bit more stable.
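One possible way to vary K along these lines is a simple tiered function. The thresholds and values below are illustrative guesses, not a standard scheme:

```csharp
using System;

class DynamicKDemo
{
    // A hypothetical tiered K-factor: new players move fast to find their
    // true ELO, mid-range players stay fairly responsive, and top-rated
    // players are stable. All breakpoints here are made-up example values.
    static int DynamicK(int rating, int gamesPlayed)
    {
        if (gamesPlayed < 30) return 32; // provisional period: rapid adjustment
        if (rating < 2000) return 24;    // low/mid ratings: still responsive
        return 10;                       // high ratings: slow moving
    }

    static void Main()
    {
        Console.WriteLine(DynamicK(1500, 10));  // 32 — still provisional
        Console.WriteLine(DynamicK(1500, 100)); // 24 — established mid-range player
        Console.WriteLine(DynamicK(2200, 100)); // 10 — established top player
    }
}
```

In `CalculateELO`, you would then replace the constant `eloK = 32` with a call to something like `DynamicK` for each player (which also means the two players’ deltas may no longer be exactly symmetric).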

Ultimately, K should usually sit somewhere between 10 and 32, and the right value depends entirely on what you are rating players in.
