The problem you describe sounds like two Markov chains (or discrete-time Markov processes). Each number (call them x and y) occupies a certain state, and at each step of the process it has probability m of moving to a different state (with each of the other states being equally likely).
This problem is actually fairly straightforward: the stationary distribution of the process that x (or y) follows is simply the one where x is equally likely to be in any of the given states, so P(x = k) = 1/a. Because the initial value of x is also drawn uniformly at random, it follows that x is in the stationary distribution from the very start.
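You can also check this numerically, independently of Mathematica. Here is a quick sketch in plain Python (with a = 4 states and m = 0.3 as example values of my own choosing) verifying that the uniform distribution is a fixed point of the transition matrix, i.e. that π P = π:

```python
a = 4     # number of states (example value)
m = 0.3   # probability of leaving the current state (example value; any m works)

# Transition matrix: stay with probability 1 - m, otherwise jump to one of
# the other a - 1 states uniformly at random.
P = [[1 - m if i == j else m / (a - 1) for j in range(a)] for i in range(a)]

pi = [1 / a] * a   # the uniform distribution

# Apply one step of the chain: pi_next = pi . P (row vector times matrix).
pi_next = [sum(pi[i] * P[i][j] for i in range(a)) for j in range(a)]

# The distribution is unchanged, up to floating-point noise.
print(max(abs(x - 1 / a) for x in pi_next))
```

The key observation is that this matrix is doubly stochastic (its columns sum to 1 as well as its rows), which is exactly the condition for the uniform distribution to be stationary, regardless of m.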
It may sound strange, but the value of m has no influence on this. For example, try this code:
StationaryDistribution[
  DiscreteMarkovProcess[
    {1, 0, 0, 0},  (* initial distribution: start in state 1 *)
    {
      {1 - m, m/3, m/3, m/3},
      {m/3, 1 - m, m/3, m/3},
      {m/3, m/3, 1 - m, m/3},
      {m/3, m/3, m/3, 1 - m}
    }
  ]
]
The matrix is the transition matrix of the Markov process (here, for 4 states). Each entry M_ij is the probability that the next state is j given that the current state is i (note that each row of M sums to 1). As you can see by executing the code, the stationary distribution does not depend on m: it is simply 1/4 for every state. This should make your problem fairly easy to solve, I think.
Of course, if the transition matrix is different (for example, if not every state is equally likely to be visited), then the story changes. I suggest you read up on Markov chains and processes if you want to know more.
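To see what happens with a less symmetric matrix, here is a hedged Python sketch (the 2-state matrix is an example of my own choosing) that approximates the stationary distribution by power iteration, i.e. by repeatedly applying the transition matrix to an initial distribution until it stops changing:

```python
def stationary(P, steps=10_000):
    """Approximate the stationary distribution of transition matrix P
    by power iteration, starting from the uniform distribution."""
    a = len(P)
    pi = [1 / a] * a
    for _ in range(steps):
        # One step of the chain: pi <- pi . P (row vector times matrix).
        pi = [sum(pi[i] * P[i][j] for i in range(a)) for j in range(a)]
    return pi

# A biased 2-state chain: state 1 is "stickier" than state 2,
# so the stationary distribution is no longer uniform.
P = [[0.9, 0.1],
     [0.3, 0.7]]
print(stationary(P))  # close to [0.75, 0.25]
```

Solving π P = π by hand for this matrix gives 0.1 π₁ = 0.3 π₂, i.e. π = (3/4, 1/4), which matches the numerical result.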