
[Guide] Forgery Mythic Odds, Maths, and Madness: One Man's Quest to Reclaim His Clout

Discussion in 'Wynncraft' started by Sockmower, Jul 13, 2023.


did you read this post (don't lie)

  1. none of it - 10 vote(s) (41.7%)
  2. just the non-spoilered bit - 1 vote(s) (4.2%)
  3. all of it (i am cool) - 6 vote(s) (25.0%)
  4. i skimmed... - 7 vote(s) (29.2%)
Thread Status:
Not open for further replies.
  1. Sockmower

    Sockmower Be excellent to each other CMD CHAMPION

    A while ago I made a thread about your chances of getting a Mythic from the Forgery Chest. It did pretty well (as I hope all my threads do) in terms of likes and reactions, and people genuinely seemed to find it helpful. Unfortunately, it didn't stay the perfect guide for long, because shortly afterwards the base mythic chance was buffed to 1.5%; you can find the updated odds below. Some people also didn't like it because I "cheated" by using simulation to determine the odds. WELL NOT TODAY BUCKOS. STRAP YOURSELVES IN FOR THE WILDEST RIDE IN THE WYNNDERNESS as I tell you the story of One Man's Quest to Reclaim His Clout.

    But first, for those with short attention spans, or who aren't that interested in my mathematical exploits, here are the updated Graphs and Stats(tm):
    • The expected number of runs before you *should* get a mythic is ~37.5. If you're curious what your total cumulative chance of getting a mythic is for any given dry streak, here's the graph:
      • [Graph: cumulative chance of at least one mythic vs. dry streak length]
      • We can see that this looks about right, with the curve flattening out as it approaches 100%, because of how unlikely it is you would ever reach that long a dry streak in the first place.
    • And here's the expected number of mythics you would get by run count (if you enjoy running the forgery, maybe don't look...):
      • [Graph: expected number of mythics by run count, with pity]
      • We can see the pity has a small effect for the first 100 or so runs, then stabilises, as one would expect from the law of averages. Here's the original, non-pity EV for comparison:
      • [Graph: expected number of mythics by run count, without pity]
      • We can see that it is completely linear, and that the overall EV is lower.
    Anyway, I hope that was interesting. If it wasn't interesting enough, keep reading...

    Drop a like and subscribe before you descend into despair with:

    SO. On my original post I got flamed a little for "cheating" by using simulation instead of doing it by maths. Well if you were one of those people, STRAP IN. We're about to go wild.

    The first thing I wanted to determine was the cumulative percentage chance of pulling at least one mythic over a dry streak of length n. We can determine this pretty easily by noting the following:
    • The odds of getting at least one mythic are equivalent to (1 - [the odds of getting no mythics]).
    • We can determine the total chance of getting no mythics by multiplying together the individual chances of missing on each pull, since a dry streak of length n is just missing n times in a row.
    • So we subtract each per-pull mythic chance from one, multiply the results together to get the odds of getting no mythics, then subtract that product from one to get the cumulative odds of at least one mythic.
    • For instance, P(3) = 1 - ((1 - 0.015) * (1 - 0.015 * 1.01) * (1 - 0.015 * 1.01 * 1.01)) ~= 0.0448
    • We can generalise this pretty easily into the following formula:
    • P(n) = 1 - ∏ (1 - 0.015 * 1.01^i), taking the product over i = 0 to n-1
    • Perfect! I can no longer be flamed for using "cheating" and "simulation".
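    For anyone who wants to poke at the numbers themselves, here's a minimal Python sketch of that formula (the 1.5% base chance and the 1.01 pity multiplier per dry pull are just my reading of the mechanics above):

    ```python
    # Cumulative chance of at least one mythic over a dry streak of n pulls
    # (a sketch; assumes 1.5% base chance, multiplied by 1.01 per dry pull).
    def at_least_one_mythic(n, base=0.015, pity=1.01):
        no_mythic = 1.0
        for i in range(n):
            no_mythic *= 1 - base * pity**i   # chance of missing pull i+1
        return 1 - no_mythic

    print(at_least_one_mythic(3))   # ~0.0448, matching P(3) above
    ```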
    But I wanted to go further. I wanted to calculate the exact expected value for any given depth. To do so, we can define E(n) as a big combinatorial expansion. For example, for E(2), consider all the possible combinations for two pulls from the chest:
    • That is to say, {[No Mythic, No Mythic], [Mythic, No Mythic], [No Mythic, Mythic], [Mythic, Mythic]}
    • We calculate the odds of each branch occurring and then multiply by the value of that branch to get the Expected Value
    • We can then sum these EVs to get the overall EV of that depth.
    • First, calculate the odds:
      • [No Mythic, No Mythic] = (1-.015) * (1-(.015*1.01)) = 0.97007725
      • [Mythic, No Mythic] = .015 * (1 - .015) = 0.014775 (note the pity resets after a mythic, so the second pull is back at the base 1.5%)
      • [No Mythic, Mythic] = (1 - .015) * (.015 * 1.01) = 0.01492275
      • [Mythic, Mythic] = .015 * .015 = .000225
    • Then, we multiply by the value of that branch (number of mythics) to get the EV of that branch:
      • 0 * 0.97007725 = 0
      • 1 * 0.014775 = 0.014775
      • 1 * 0.01492275 = 0.01492275
      • 2 * .000225 = 0.00045
    • And finally sum them up to get the overall EV of this operation:
      • 0.014775 + 0.01492275 + 0.00045 = 0.03014775
    • Easy! So, after two pulls from the Forgery Chest, we would expect, on average, 0.03014775 mythics.
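    Here's that expansion written out as brute-force code, enumerating every branch directly (a sketch, assuming the same mechanics as above: 1.5% base chance, x1.01 per dry pull, reset to base after a mythic):

    ```python
    from itertools import product

    # Enumerate every possible [Mythic / No Mythic] sequence of length n,
    # weight each branch by its probability, and sum value * probability.
    def expected_mythics_bruteforce(n, base=0.015, pity=1.01):
        total = 0.0
        for branch in product([True, False], repeat=n):   # True = pulled a mythic
            prob, dry = 1.0, 0
            for got_mythic in branch:
                p = base * pity**dry
                if got_mythic:
                    prob *= p
                    dry = 0           # pity resets
                else:
                    prob *= 1 - p
                    dry += 1
            total += branch.count(True) * prob            # branch value * branch odds
        return total

    print(expected_mythics_bruteforce(2))   # ~0.03014775
    ```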
    Now, those of you who are familiar with this kind of graph-based problem might already be seeing the issue here. We have to calculate 2^n different combinations to calculate E(n). This is where we kind of start going off the rails a bit.

    Let's first reimagine our problem as an unordered binary tree, where each node has a value (the Expected Value of that node). We can calculate the EV of a node "as we go", recursing down the tree, as (the "get" branch of this node multiplied by the probability of getting a mythic here) plus (the "don't get" branch multiplied by one minus that probability). For those whose brains work better with code, here's an implementation in python:
    [Image: Python implementation of the recursive EV calculation]
    where cumulative(n) = E(n)
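    In case the screenshot doesn't load for you, here's roughly what that implementation looks like (a reconstruction from the description above, not the exact code in the image):

    ```python
    # EV of mythics over the remaining `rdepth` pulls, given a current dry streak
    # of `dry` (a sketch; same assumed 1.5% base chance and 1.01 pity multiplier).
    def ev(rdepth, dry, base=0.015, pity=1.01):
        if rdepth == 0:
            return 0.0
        p = base * pity**dry                        # mythic chance at this node
        get = p * (1 + ev(rdepth - 1, 0))           # "get" branch: +1 mythic, pity resets
        miss = (1 - p) * ev(rdepth - 1, dry + 1)    # "don't get" branch: streak grows
        return get + miss

    def cumulative(n):
        return ev(n, 0)

    print(cumulative(2))   # ~0.03014775, same as the hand calculation above
    ```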

    Here's a visualisation of the calculation tree:
    [Image: the full calculation tree]

    Now some of you might be thinking "Sock, you idiot, this is just the last algorithm but rewritten, it has pretty much the same time complexity". And you would be WRONG. Well actually you would be RIGHT, but doing things this way lets us notice something that we didn't before. Those of you who looked closely at the previous image might have noticed something. Last chance to try and find it for yourself before I tell you below:

    Many of the branches in the tree are actually identical. I've outlined a few below:
    [Image: the calculation tree with identical sub-branches outlined]

    Since the EV of a node is purely a function of the remaining depth in the search (rdepth) and the current dry streak (dry), we can cache every node we calculate and save ourselves a TONNE of recursion. Here's the same calculation tree visualised with this "caching". We only have to calculate each node once, because everything below it is cached and we can just reference that value:
    [Image: the calculation tree with cached nodes collapsed]
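    If you want to try the cached version yourself, it's literally the same recursion with a cache slapped on top (a sketch using Python's functools.lru_cache as the cache):

    ```python
    from functools import lru_cache

    # Same recursion as before, but every (rdepth, dry) node is computed at most once.
    @lru_cache(maxsize=None)
    def ev_cached(rdepth, dry):
        if rdepth == 0:
            return 0.0
        p = 0.015 * 1.01**dry
        return p * (1 + ev_cached(rdepth - 1, 0)) + (1 - p) * ev_cached(rdepth - 1, dry + 1)

    print(ev_cached(100, 0))                 # EV of mythics over 100 forgery runs
    print(ev_cached.cache_info().currsize)   # ~n^2/2 distinct (rdepth, dry) nodes: the "triangle"
    ```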

    This lets us calculate the EV to a given depth *really* quickly. Our naive approach with the tree had to evaluate 2^n - 1 nodes, whereas this memoization approach only evaluates n(n+1)/2 of them: O(n^2) instead of O(2^n).

    Oh, what's that? You want a proof of the time complexity? Get ready for a twist to rival that of the shapeland video: You ALREADY SAW MY PROOF. IT'S THAT VISUAL PROOF RIGHT UP THERE. That's right, n(n+1)/2 is the sequence for TRIANGULAR NUMBERS. And that structure you see up there? IT IS A TRIANGLE. CHECKMATE, NERDS. I can already hear them seething and coping about "mimimimi visual proof not proof" and I frankly DO NOT CARE. Prove it yourself.

    Anyway, in conclusion, I win. I'm cool. I'm based. You cannot say I cheated any longer.

    Well, bet you didn't expect an interesting(?) lesson in graph theory from your casual wynncraft forums browsing. I sure hope you enjoyed, because I didn't. But I think I have regained my clout and can no longer be called a mathematical cheat.

    This was a very... interesting project. Please like and subscribe if you enjoy vaguely-coherent maths-related rants.
     
  2. Hams

    Hams Content Team Manager CT Manager Support Team Community Manager Builder

    That's some Maths!
     
    Bixlo likes this.
  3. Aucorg

    Aucorg useless meat bag VIP

    Sockmower...? I hardly know 'er !
     
    That_Chudley and Sockmower like this.
  4. Volt7

    Volt7 volt seven HERO

    video game
     
    luckeyLuuk and cmosier like this.
  5. DrGREEN

    DrGREEN wynncraft.wiki.gg is where its AT CHAMPION

    Just a similar question, but I felt like asking it: do mythics still have a 0.0007% chance of dropping from a mob/chest?
     
  6. TwageTomato

    TwageTomato Coder | Like-Giver | Tomato | Musician CHAMPION

    For the “written as Python code” bit, I’m fairly certain you could rewrite that in a manner that calculates in linear runtime, if not less. Such an algorithm wouldn’t construct the tree of probabilities for reaching a specific set of (get, didn’t get), but would simply spit out directly the odds of finding X mythics by the time you reach Y many runs.

    as for what that algorithm would actually be, that question is above my current mathematical pay grade.
     
  7. Alex1

    Alex1 Alex HERO

    This post is awesome! This approach to solving the problem is incredibly interesting, and I love how you were able to explain your method in an intuitive manner.

    As someone who commented on your first post, I really wasn't trying to be offensive or hurtful; I was just providing another solution to the problem. We both came to the same solution, just using different methods. No method is strictly better or worse than another, and comparing different approaches to the same problem is a great learning opportunity.

    And in the end, if my comments inspired you to explore the problem deeper and come to this awesome solution, then I'm glad we were both able to learn something new from this.

    With that being said, I updated my previous solution to match the new base rate, and got the same result, so I agree with your solution! This is my mathematical solution as a formula, which is solvable in O(1) time with respect to "depth".

    Without the pity rate, this is a relatively straightforward problem, as the binomial distribution has a closed-form solution and is one of the basic distributions that students learn in statistics.

    However, the binomial distribution's major assumption is that the chance of "success" is constant with the number of trials (independence between trials). Adding the pity rate removes the validity of this assumption.

    The Poisson-Binomial distribution is the correction to the binomial distribution that allows for changing probabilities of success. In our case, the probability of success increases each time you fail (no mythic), then resets back to the base value when you succeed (get a mythic).

    We can represent this using a Markov chain:

    [Image: Markov chain diagram]

    where we assign a unique "state" to each possible number of runs since receiving a mythic. (NOTE, I didn't update this graph from my previous post, so the 1.2% should be 1.5%, meaning all of the percentages are off, but the concept is still the same)

    This is where our paths to the solution diverge. One approach is to look at all possible "paths" that we can take through this Markov chain (or adjacency matrix) taking 2 steps starting from 0.

    [No Mythic, No Mythic] == state 2
    [Mythic, No Mythic] == state 1
    [No Mythic, Mythic] == state 0
    [Mythic, Mythic] == state 0

    Then, you could track the expected value of each state (which is what you did), and sum them up to get the expected number of mythics.

    However, my approach using Markov chains allows us to do something different. The key insight is that every time you are in state 0, that means you either just started or you just got a mythic. So, we can create a probability density function (PDF) that represents the probability of being in state 0 at every run (meaning you just got a mythic), then, take the integral (cumsum) to get a cumulative density function (CDF) that represents the number of expected mythics at every run.

    So now our problem becomes finding the probability of being in state 0 after "n" number of runs. This is where the magic of Markov chains comes in. The entire point of a Markov chain is that it is represented as a matrix, M, and the following equation holds:

    P = M^n * z

    Where M is the Markov chain, n is the number of runs, z is the initial state, and P is the probability vector of all states after n runs.

    Let's see what this looks like for a small number of n, but before that, this is how you build the Markov chain:


    Note, copying this from my previous post, the new probability is 1.5 but the process is the same

    Each state represents the number of runs since you got a mythic. Every state can go back to state 0 (you can get a mythic at each state) with a certain probability (1.2 * 1.01 ^ n, in percent). Every state connects to the next state with probability 1 - (1.2 * 1.01 ^ n), so that all probabilities add up to 1.

    Luckily, turning this into a transition matrix (https://en.wikipedia.org/wiki/Stochastic_matrix) is straightforward, and we get:

    [Image: the transition matrix]
    Which is actually really easy to create in software, since it's essentially just the (1.2 * 1.01 ^ n) column vector smashed next to a matrix where the diagonal entries are 1 - (1.2 * 1.01 ^ n), or:

    [Image: the matrix written in terms of po and pr]

    (Where po is the initial probability and pr is the pity rate). The last thing we need is the total size of the matrix, which is based on the number of runs it would take to reach a 100% mythic chance: floor( log(100/po) / log(pr) ), which gives 444 dry-streak states here, so a 445 x 445 matrix once you include the "got a mythic" state (also, OP's claim that it is mathematically impossible to get fewer than 2 mythics doesn't make sense, since there are 444 states before you are guaranteed 100% mythic chance, and only 429 runs....)

    Ok so now we have a 445 x 445 transition matrix, and the whole point of making it is that we can determine the probability distribution over states (i.e. how many runs you are past your last mythic) after N runs. For a sanity check, let's hand-calculate the values for 2 runs and see if they match what we get with the Markov chain.

    Putting this into code lets us build this matrix with any mythic rate and pity rate:

    ```matlab
    po = 1.5   % initial probability (percent)
    pr = 1.01  % pity rate (multiplier per dry run)
    numstates = floor( log(100/po) / log(pr) )   % dry-streak states before a guaranteed mythic
    left = po .* pr .^ [0:numstates] ./ 100;     % mythic chance in each state
    right = [eye(numstates) ; zeros(1, numstates)];   % "dry streak grows by one" structure
    markov = [left' ((1-left') .* right)];       % row i holds the transitions out of state i-1
    markov(end, 1) = 1;                          % final state: a mythic is guaranteed
    markov = markov';   % transpose so columns are "from" states, matching P = M^n * z above
    ```

    Now with the buffed probabilities, our new Markov chain is a 423x423 matrix, meaning there are 422 dry-streak states (plus the "got a mythic" state).

    Our initial vector z is just (1, 0, 0, 0, 0, ..., 0)^T, because we start with no mythic and no pity built up. To model your current situation instead, you can put the 1 at the index corresponding to your current number of failed runs, i.e. z = (0, ..., 0, 1, 0, ..., 0)^T, which is pretty cool. I might package this into a simple calculator app at some point with this in mind.

    Now, we create our PDF. After 1 run we have:

    P_1 = M^1 * z = (.015, .985, 0, 0, 0 ,0 ,0 ....) meaning there is a .985 chance we are in the "1 run streak", and .015 chance we are in the "got mythic state"
    P_2 = M^2 * z = (.015148, .014775, .97008, 0, 0 ,0 ,0 ...) .97008 chance we are in the "2 run streak", .014775 "one run streak", .015148 "got mythic state"

    Now, this is a PDF. To get a CDF, we just take the cumulative sum of P_1 and P_2:

    P_1 + P_2 = (.03014775, .999775, .97007725, 0, 0, 0, 0 ...). The first entry, .03014775, is the expected number of mythics after two runs, which is exactly what you calculated with your method!
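    If you want to reproduce this without MATLAB, here's a rough NumPy equivalent (a sketch; the variable names are mine, and the matrix is built directly in the column convention used by P = M^n * z):

    ```python
    import numpy as np

    po, pr = 1.5, 1.01                                          # base chance (percent) and pity multiplier
    numstates = int(np.floor(np.log(100 / po) / np.log(pr)))   # 422 dry-streak states for the buffed rate
    p = po * pr ** np.arange(numstates + 1) / 100               # mythic chance in each state

    # M[i, j] = P(next state = i | current state = j)
    M = np.zeros((numstates + 1, numstates + 1))
    M[0, :] = p                                                 # from any state you can drop back to state 0
    for j in range(numstates):
        M[j + 1, j] = 1 - p[j]                                  # otherwise the dry streak grows by one
    M[0, numstates] = 1.0                                       # final state: a mythic is guaranteed

    z = np.zeros(numstates + 1)
    z[0] = 1.0                                                  # start fresh: no pity built up

    P1 = M @ z
    P2 = M @ P1
    print(P1[:3])         # ~ [0.015, 0.985, 0]
    print(P2[:3])         # ~ [0.015148, 0.014775, 0.970077]
    print(P1[0] + P2[0])  # ~ 0.03014775 expected mythics after two runs
    ```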

    The beauty of Markov chain analysis is its time complexity. Finding the chance you got a mythic on the 15000th run has the same runtime as finding the chance of getting a mythic on the 1st run.

    Technically, the matrix M first has to be diagonalized before it can be efficiently exponentiated. Matrix diagonalization is O(m^3), where m is the size of the matrix. Luckily, for our problem, the size of the matrix doesn't depend on the number of runs; it never changes.

    However, for M^1 * z, Matlab won't bother diagonalizing M, so that will be faster than M^1000. But for M^1000, Matlab will do the O(m^3) diagonalization, so M^1000 takes about the same time as M^1000000.

    [attached images]
    Now that we have our PDF in O(1), we take an integral (or just a sum in this case) to find the CDF. Therefore, creating the same plot as you did is actually an O(n) operation, where n is the number of runs.

    Here is the PDF (the bump is pretty cool, you have a better chance of getting a mythic on run 100 rather than run 200, THIS DOES NOT MEAN YOU GET MORE MYTHICS IN 100 RUNS THAN 200 THOUGH)
    [Plot: the PDF]
    And here is the CDF, which is the more useful plot, and validates your solution!

    [Plot: the CDF]


    I like my solution, but I did some research and found some other papers on the Poisson-Binomial distribution:

    http://www.columbia.edu/~wt2319/PB.pdf

    https://www.sciencedirect.com/science/article/abs/pii/S0167947312003568

    I haven't looked much into these papers, but there might be something interesting in here. The first one uses Fourier transform methods to solve this problem, which blows my mind.

    EDIT: Thinking about this more, I don't think this mythic drop problem can be solved with this distribution, because while Poisson-Binomial allows for varying probabilities of success between trials, it doesn't account for the pity resetting once you get a mythic. Therefore, Poisson-Binomial would only work for calculating the expected number of runs until your FIRST mythic.
     
    Last edited: Jul 14, 2023
    luckeyLuuk, Dr Zed, hppeng and 4 others like this.
  8. TwageTomato

    TwageTomato Coder | Like-Giver | Tomato | Musician CHAMPION

    Yup, that sounds about right. Thank you for confirming my suspicions about there being a way of solving this in a significantly lower order of runtime complexity :)
     
    Decap likes this.
  9. Decap

    Decap Best dad bod of 2019 VIP+

    Correct me if I'm wrong but with your approach, if you increase the depth by 10x it runs in the same amount of time vs. taking a 100 times longer with the first approach? Sorry I'm new to Big O notation.
     
  10. Sockmower

    Sockmower Be excellent to each other CMD CHAMPION

    Haha, yeah, I know. I was just being a lil facetious with the whole narrative for effect. This is a very cool solution to the problem! I considered using markov chains but I felt like it was deliberately obfuscating the meaning of "state" in a markov chain (and i thought that it made for a more visually interesting and transferable post to see how it worked as a tree). also i am not clever enough to understand markov chai-
     
  11. hppeng

    hppeng 0 intel is the correct amount of intel HERO

    About the matrix diagonalization, is the matrix size not growing with the number of states considered? Theoretically you would have to consider all (infinitely) many states since you might just.. never get a mythic. Even the matrix multiplication itself (naively) after doing the diagonalization would already be O(n^2) which is the same time complexity.

    (I might be misunderstanding the method but afaik you diagonalize, roll out the exponentiation in constant time, and then multiply to get your solution)

    though maybe you can make some argument about how the matrix is sparse? I haven't really thought about this problem in detail. Very interesting regardless -- wynnbuilder actually has some (old) results on applying similar analysis to mana steal and it's cool to see it pop up elsewhere in wynn :)


    EDIT: I am dumb and forgot that you only have finitely many states, since the probability of getting a mythic can go up to 100%. However, up until that point the matrix math should still scale as n^2; Sockmower's DP method will also "cap out" at 100% and only scales as n^2 beforehand.
     
    Last edited: Jul 14, 2023
    FelixTape likes this.
  12. Alex1

    Alex1 Alex HERO

    Glad this has been fun for the both of us. Love to continue this math rivalry.
    The matrix diagonalization runtime complexity isn't relevant to this analysis because the size of the matrix doesn't ever change. The matrix is a 423x423 matrix no matter what. If you want to find the PDF after 1 run, you diagonalize the matrix once, then exponentiate. If you want to find the PDF after 10000 runs, you diagonalize the same matrix once, then exponentiate. The matrix exponentiation is constant time with respect to everything of interest in this problem.
     
  13. hppeng

    hppeng 0 intel is the correct amount of intel HERO

    Yeah, if you choose to model the entire problem (in which case the tree is also constant size or can be trivially reduced; it stops at some point for both methods since the probabilities stop changing...)

    up until that point though the size of the matrix is dependent on the maximum number of runs you want to model
     
    Sockmower likes this.
  14. Dr Zed

    Dr Zed Famous Adventurer HERO

    Guys this is a Minecraft server; no high level math please. I already have enough PTSD from Calc 3 and linear algebra and these posts are giving me even more /s
     