Quick question: did the randomness have any bias for synergy between the champions? At least 80% of my champ selects include a brief discussion on having a balanced team (AD with AP, and a tank or a support that enables the ADC, etc.). This happens a lot more often than just picking overpowered champions.
Prate_k wrote:
Quick question: did the randomness have any bias for synergy between the champions? At least 80% of my champ selects include a brief discussion on having a balanced team (AD with AP, and a tank or a support that enables the ADC, etc.). This happens a lot more often than just picking overpowered champions.
The champions were not an in-depth representation of their real-game equivalents; they were merely given stats equal to either a normal baseline or an inflated amount to represent "too strong". The point is that given a system whose only goal is to keep you at a 50% winrate, even with ridiculous stats the overpowered champions still end up around 50%.
Ok, so the app I designed has a pool of 40 players (just lines of code) and a "champion selection" of 15, with 5 being obviously overpowered (again, just lines of code; they're given triple the stats of the other "champions").
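For anyone curious, here's a minimal Python sketch of what that setup might look like. All the names (and the champion ids 0-4 being the overpowered ones) are my own assumptions, not the actual app's code:

```python
BASELINE_STATS = 100  # arbitrary baseline; OP champs get triple this

def make_pool():
    # 40 "players" that just track wins and games played
    players = [{"id": i, "wins": 0, "games": 0} for i in range(40)]
    # 15 "champions"; assume ids 0-4 are the 5 overpowered ones
    champions = [
        {"id": c, "stats": BASELINE_STATS * (3 if c < 5 else 1), "wins": 0, "games": 0}
        for c in range(15)
    ]
    return players, champions
```

Nothing fancy, just enough state to accumulate per-player and per-champion winrates.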
The first 500 "games" are purely random champion selections, through which each "player" develops a baseline player winrate and champion winrate. After those first 500 rolls, the app reads the results of the previous games and gives each player a higher probability of picking champions they had a higher winrate on (I had to rewrite at this point because everybody wanted to play the same 5 champions).
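The winrate-weighted pick could look something like this; this is a sketch under my own assumptions (the history format and the small floor weight are mine), not the app's actual logic:

```python
import random

def pick_champion(history, champ_ids, rng=random):
    # history: {champ_id: (wins, games)} for this player.
    # Champions the player has won more on get proportionally more weight.
    weights = []
    for c in champ_ids:
        wins, games = history.get(c, (0, 0))
        winrate = wins / games if games else 0.5  # unplayed champs get a neutral prior
        weights.append(winrate + 0.01)            # small floor so no champ hits zero
    return rng.choices(champ_ids, weights=weights, k=1)[0]
```

This reproduces the problem described above: a player with a perfect record on one champion will pick it almost every time.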
The first rewrite happened because everybody overwhelmingly chose the overpowered champions and it became a 5v5 statistics-gathering tool. In the rewrite I added an "at most" limit of 3 overpowered champions per team (sometimes a team had 3, sometimes 1 or 2), with the remaining slots going to "champions" that were not overpowered. This made the results much more interesting, and I reran it for another 500 "games" to get new results with the rewrite.
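The "at most 3 overpowered per team" draft could be sketched like this (again my own naming; I'm assuming the OP count is simply rolled each draft, which matches the "sometimes 1, 2 or 3" behaviour described):

```python
import random

OP_CHAMPS = list(range(5))         # assumed ids for the 5 overpowered champions
NORMAL_CHAMPS = list(range(5, 15))

def draft_team(rng=random):
    # Roll how many overpowered champions this team gets (0-3),
    # then fill the remaining slots with normal champions.
    n_op = rng.randint(0, 3)
    team = rng.sample(OP_CHAMPS, n_op) + rng.sample(NORMAL_CHAMPS, 5 - n_op)
    rng.shuffle(team)
    return team
```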
So now for matchmaking: each "player" had a baseline "winrate" which I called "skill", and I had to split the code down two paths.
The first code path (applying Riot's matchmaking goal of keeping everybody as close to 50% as possible) led to sometimes hilarious matches of "players" with sub-40% winrates fighting players with 70%+ winrates. Over 1000 games this ended up pretty close to a 50% winrate for each player; the widest outlier was 54% overall (but with an 88% winrate on one of their champions).
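One way to get that behaviour (this is my guess at the mechanism, not the app's actual code) is to rank everyone in the lobby by winrate and snake-draft them onto two sides so the summed winrates stay close. That's exactly what produces the "40% player vs 70% player" matches: individual skill doesn't matter as long as the team totals cancel out.

```python
def balance_teams(players):
    # players: list of (player_id, winrate) pairs, length 10.
    # Snake draft (A, B, B, A, A, B, B, A, ...) keeps the summed
    # winrates on each side close, pushing every match toward a coin flip.
    ranked = sorted(players, key=lambda p: p[1], reverse=True)
    team_a, team_b = [], []
    for i, p in enumerate(ranked):
        (team_a if i % 4 in (0, 3) else team_b).append(p)
    return team_a, team_b
```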
The second code path gave each player a rating based on their initial 500 games and broke them up into tiers (10 tiers total). At this point I had to widen the player pool and the "initial" games pool: I increased the player pool to 1000 so I could break them into tiers and match them up, and I raised the initial limit to 8000 games (this took forever to get through). After the rewrite and the initial games, I ran 5000 games through both matchmaking codes (no major changes happened in the first code). In the second code, however, the overpowered champions were getting "server"-wide winrates of 80% or more, while in the first code their winrates stayed at around 50% no matter how many games were played.
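The tier split itself is simple; a sketch under my assumptions (equal-size tiers cut from a winrate ranking, tier 0 being the best; matches would then be made within a tier):

```python
def assign_tiers(players, n_tiers=10):
    # players: list of (player_id, winrate) -> {player_id: tier}, tier 0 = best.
    # Assumes the pool divides evenly, as with the 1000-player pool above.
    ranked = sorted(players, key=lambda p: p[1], reverse=True)
    size = len(ranked) // n_tiers
    return {pid: min(i // size, n_tiers - 1) for i, (pid, _) in enumerate(ranked)}
```

Because matches now pit similarly skilled players against each other, team-strength cancellation disappears, and the overpowered champions' true winrates (80%+) show through.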
tl;dr:
If you code matchmaking to keep every player at, or as close as possible to, a 50% winrate, you will ALSO keep each "champion" at roughly 50%, even though a third of them are obviously too strong.
Greatest deviations in champion winrate from the first code: 46% and 53%
Greatest deviations in champion winrate from the second code: 11% and 88%
Also, duh: the team with more overpowered champions won nearly every time, in both iterations.
Until Riot fixes how they do matchmaking, balancing champions from winrate data is like trying to pin down a cloud, or nail Jell-O to the wall.