The arbitrary-precision model represents a number `` x`p `` by an ordered pair: a value
$x$ and an uncertainty
$dx>0$. When
$x \ne 0$, the uncertainty appears as the machine-precision number equal to
$p=- \log_{10} |dx/x|$, where
$p$ is called the precision; when
$x = 0$, the uncertainty appears as the machine-precision number equal to
$a=- \log_{10} dx$, where
$a$ is called the accuracy. When
$x = 0$, the `FullForm[]` looks like ``` 0``a ``` instead of `` x`p ``. Whether the uncertainty
$dx$ or the precision
$p$ or accuracy
$a$ is stored internally is immaterial to understanding how it works. Perhaps the only thing to realize about the internal representation is that it is probably a finite-precision binary number, and that's only helpful in explaining things that are usually unimportant, such as why `` SetPrecision[0.1`50, Infinity] `` is not exactly `1/10` and why its denominator is a power of 2.
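One can see the binary representation peeking through with a quick check (the exact rational returned depends on the internal binary approximation, so only its general shape is asserted here):

```mathematica
x = SetPrecision[0.1`50, Infinity];  (* strip the uncertainty, keep the exact stored value *)
x == 1/10                            (* False: the stored value is a binary approximation *)
IntegerQ[Log2[Denominator[x]]]       (* True: the denominator is a power of 2 *)
```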

There are rules for computing how the uncertainties are combined and propagate throughout a computation. *Mathematica* uses these rules to calculate the new uncertainty/precision/accuracy of the result. I don't know for certain what exactly is done, but a first-order approximation is that in computing
$y = F(x_1, x_2,\dots, x_n)$
on the inputs
$x_j$
with uncertainties
$dx_j$, the uncertainty will be approximately

$$dy = \left| \nabla F(x_1, x_2,\dots, x_n) \cdot (dx_1, dx_2,\dots, dx_n) \right|\,,$$
or in terms of a single-variable computation
$y = F(x)$,

$$dy = \left| F'(x)\, dx \right| \,.$$
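For example, with $F = \exp$ the formula gives $dy = e^x\,dx$, so $dy/y = dx$ and the precision of the result should equal the accuracy of the input. A quick check of this (the exact digits may vary by version):

```mathematica
x = 1.5`20;
{Accuracy[x], Precision[Exp[x]]}  (* the two values should agree *)
```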

Another rule is that if
$dy > |y|$, then
$y$ is replaced by
$0$. Finally, it is sometimes helpful to understand that the value for
$x$ carries several extra guard digits beyond the precision. This is so that the roundoff error in computing
$y$ is negligible compared to the uncertainty
$dy$.
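The replaced-by-zero rule shows up in catastrophic cancellation: subtracting a number from itself leaves a result whose uncertainty exceeds its value, so it becomes an arbitrary-precision zero that carries only accuracy. A small illustration:

```mathematica
x = 1.2345`20;
y = x - x    (* an arbitrary-precision zero, displayed like 0.*10^-20 *)
Accuracy[y]  (* roughly Accuracy[x] - Log10[2], since dy = 2 dx *)
```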

As for equality, two pairs
$(x, dx)$ and
$(y, dy)$ are considered equal if the difference
$|x - y|$ is less than their combined uncertainty
$dx + dy$.
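A minimal check of this rule: the two values below differ by `0.001`, which is inside the combined uncertainty of two 3-digit numbers near `2` but far outside that of two 10-digit ones.

```mathematica
2.`3 == 2.001`3     (* True: |x - y| < dx + dy *)
2.`10 == 2.001`10   (* False: the uncertainties are now much smaller than 0.001 *)
```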

Now to the problem in the OP:
In both cases, the value of
$x$ is exactly
$8.5$ or
$17/2$, which is exactly representable in binary. The only difference between `` 8.5`50 `` and `` 8.5`51.23 `` is the uncertainty
$dx$. To understand why the value $x = 8.5$ arises exactly in each table, one should think about the `Table[]` iterators. All the values constructed are exactly representable in binary with just a few bits, so there is no roundoff error. Further, the increment `1/2` has an uncertainty of `0`, so the uncertainties of all the entries that start with precision `50` are the same. And if
$x$ increases while
$dx$ stays constant, the precision
$- \log_{10} |dx/x|$ increases.
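One can watch the precision drift upward along such an iterator (a sketch along the lines of the OP's tables, not their exact code):

```mathematica
(* each step adds an exact 1/2, so dx stays fixed while x grows,
   and Precision[x] = -Log10[dx/x] creeps up accordingly *)
Table[{x, Precision[x]}, {x, 8.5`50, 10.5`50, 1/2}]
```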