Transparency around RNG, algorithms, and test results
A few months back I recorded 1,000 of my Bo1 ranked games, tracking land draws as a proportion of cards seen (drawn, scried, milled, or whatever else was revealed from the top of my deck). I calculated the probability of each game's land draw using a hypergeometric calculator, and I assumed land draws should follow a roughly normal distribution centered on a mean proportional to the number of lands in the deck.
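For concreteness, here is a minimal sketch of the per-game calculation I mean, using SciPy's hypergeometric distribution. All the numbers (60-card deck, 24 lands, 20 cards seen, 11 lands seen) are illustrative placeholders, not my recorded data:

```python
from scipy.stats import hypergeom

deck_size = 60   # total cards in deck (assumed example)
lands = 24       # lands in deck (assumed example)
seen = 20        # cards seen this game: drawn/scried/milled (assumed)
lands_seen = 11  # lands among those cards (assumed)

# The number of lands among `seen` cards dealt without replacement
# from the deck follows a hypergeometric distribution.
dist = hypergeom(M=deck_size, n=lands, N=seen)

mean = dist.mean()            # expected lands seen (= seen * lands / deck_size)
sd = dist.std()               # standard deviation
z = (lands_seen - mean) / sd  # distance from the mean in SDs

print(f"expected {mean:.2f} lands, saw {lands_seen}, z = {z:+.2f}")
# Tail probability of seeing at least this many lands:
print(f"P(X >= {lands_seen}) = {dist.sf(lands_seen - 1):.4f}")
```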
I found that games were more likely to fall at least 0.5 standard deviations from the mean, in either direction, than to land near it: roughly a 22% chance of being 0.5+ standard deviations land-heavy, roughly a 22% chance of being 0.5+ standard deviations land-light, and only about a 16% chance of being close to proportionate.
Why would this be a “double humped” distribution rather than a normal distribution?
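One way to sanity-check this before assuming anything about the shuffler: simulate a fair shuffle and bucket games the same way, since a discrete distribution bucketed by z-score may not split the way a continuous normal curve suggests. This is a Monte Carlo sketch using the same illustrative parameters as above:

```python
import random

deck_size, lands, seen = 60, 24, 20  # same illustrative values as above
trials = 100_000

# Hypergeometric mean and SD for lands among `seen` cards.
mean = seen * lands / deck_size
var = (seen * (lands / deck_size) * (1 - lands / deck_size)
       * (deck_size - seen) / (deck_size - 1))
sd = var ** 0.5

deck = [1] * lands + [0] * (deck_size - lands)  # 1 = land, 0 = spell
low = mid = high = 0
for _ in range(trials):
    random.shuffle(deck)  # unbiased Fisher-Yates shuffle
    z = (sum(deck[:seen]) - mean) / sd
    if z <= -0.5:
        low += 1
    elif z >= 0.5:
        high += 1
    else:
        mid += 1

print(f"<= -0.5 SD: {low / trials:.1%}, "
      f"within 0.5 SD: {mid / trials:.1%}, "
      f">= +0.5 SD: {high / trials:.1%}")
```

If a known-fair shuffle reproduces the same bucket split, the shape is an artifact of the binning; if it doesn't, the question stands.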
I have seen, anecdotally, far more extremely low-probability events in this game than I can relate. Here are a few:
Being on the draw 6 games in a row, multiple times daily. If the odds are 50/50, there is only a 1.56% chance (0.5^6 = 1/64) of this many games on the draw, or on the play, in a row. Yet this roughly 1.5-in-100 event somehow happens multiple times a day (see the first sketch after these examples).
I have had drafts where I drew 15 of the 17 lands in the deck within my top 30, or fewer, cards every game, effectively robbing me of the ability to participate in the draft or have a chance to compete. By my math, the chance of this happening in a single game is less than 1 in 10,000, and the odds of it happening three games straight are vanishingly small (see the second sketch below).
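On the play/draw streaks: the 1.56% figure is the chance of one specific stretch of 6 games. The chance of seeing some run of 6 somewhere in a session is higher and depends on how many games you play. This sketch estimates it for an assumed 20-game day, purely to put the base rate in context:

```python
import random

games_per_day, run_len, trials = 20, 6, 100_000  # games/day is an assumption
hits = 0
for _ in range(trials):
    run = best = 0
    for _ in range(games_per_day):
        # Fair 50/50 coin for being on the draw each game.
        run = run + 1 if random.random() < 0.5 else 0
        best = max(best, run)
    if best >= run_len:
        hits += 1

print(f"P(some run of {run_len}+ draws in {games_per_day} games) "
      f"≈ {hits / trials:.1%}")
```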
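On the draft floods: anyone can check the tail probability under a fair shuffle with a hypergeometric survival function. The 40-card deck and 17 lands come from the scenario above; the range of "cards seen" values is my assumption, and the result is very sensitive to that number:

```python
from scipy.stats import hypergeom

deck_size, deck_lands, flood = 40, 17, 15  # 40-card draft deck, 17 lands
# Chance of at least `flood` lands among the top n cards, fair shuffle.
for n in (20, 25, 30):  # assumed values for "cards seen"
    p = hypergeom.sf(flood - 1, deck_size, deck_lands, n)
    print(f"P(>= {flood} lands in top {n} cards) = {p:.2e}")
```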
I have seen and received responses from support stating that your algorithms are working as intended and have been independently verified. With such statistically improbable events happening at such high frequency, though, these responses raise more questions than they answer. It seems that either the algorithms are written with exploitative intent and working successfully, or they are written to simulate true randomness but are not working properly, or something was added as a "helper" that breaks true randomness and is having unintended consequences, producing these frequently bizarre results.
Where is information published around:
- What the algorithms are intended to do
- The independent verification results
Without more transparency, it will be hard for many in the community to trust that things are working as designed and without exploitative intent.