Interesting method, I guess. It is different from the one I'll show, but at least it shares the similarities of being iterative and geometrically convergent.
The (reasonably) well-known "power method" can be used to find the largest root. It amounts to finding the largest eigenvalue of the companion matrix (constructed to have the original polynomial, or its negative, as its characteristic polynomial). One starts with a random vector and iterates
newvec = mat . oldvec; newvec = newvec/Norm[newvec]
and then takes the componentwise ratio of the matrix times the last iterate to that iterate; the common ratio is the eigenvalue. Finding the smallest root amounts to using "inverse" iterations, that is, multiplying by the matrix inverse instead of the matrix. (In any practical setting, one would use a solver rather than explicit inversion, though.)
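As an aside (not from the paper), here is a minimal self-contained sketch of the same idea in Python with NumPy; the function name `power_iteration` is mine. Power iterations on the companion matrix give the largest-magnitude root, and the same iterations on its inverse give the reciprocal of the smallest-magnitude root.

```python
import numpy as np

# Companion matrix for p(x) = x^3 - 4*x^2 - 321*x + 20; its eigenvalues
# are the roots of p.
A = np.array([[0.0, 0.0, -20.0],
              [1.0, 0.0, 321.0],
              [0.0, 1.0, 4.0]])

def power_iteration(M, iters=100, seed=1234):
    """Largest-magnitude eigenvalue of M via normalized power iterations."""
    rng = np.random.default_rng(seed)
    v = rng.random(M.shape[0])
    for _ in range(iters):
        w = M @ v
        v = w / np.linalg.norm(w)   # renormalize to avoid overflow
    return v @ (M @ v) / (v @ v)    # Rayleigh quotient of the iterate

largest = power_iteration(A)
# "Inverse" iteration: iterate with the inverse of A.  (In practice one
# would solve A w = v at each step rather than form the inverse.)
smallest = 1.0 / power_iteration(np.linalg.inv(A))

print(largest, smallest)
```

For this cubic the iterations settle to roughly 20. for the largest root and 0.0622577 for the smallest, matching the Mathematica session that follows.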
Here is an example from the paper.
poly = x^3 - 4*x^2 - 321*x + 20;
We construct the matrix and verify that the characteristic polynomial is as claimed.
mat = {{0, 0, -20}, {1, 0, 321}, {0, 1, 4}};
cp = CharacteristicPolynomial[mat, x]
Out[292]= -20 + 321 x + 4 x^2 - x^3
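The same check can be done outside Mathematica, if that helps; NumPy's `np.poly` returns the coefficients of the monic characteristic polynomial det(x I - mat), which here is x^3 - 4 x^2 - 321 x + 20, the negative of the Out[292] expression.

```python
import numpy as np

mat = np.array([[0.0, 0.0, -20.0],
                [1.0, 0.0, 321.0],
                [0.0, 1.0, 4.0]])

# Coefficients of det(x I - mat), highest degree first.
coeffs = np.poly(mat)
print(coeffs)   # close to [1, -4, -321, 20]
```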
For illustrative purposes I construct the inverse directly, then start with a random vector, and do 10 iterations.
invmat = Inverse[mat];
SeedRandom[1234];
x0 = RandomReal[1, 3];
Do[nx = invmat . x0;
x0 = nx/Norm[nx], {10}];
Dividing as described above shows that we have an eigenvalue as the common ratio (so it converged nicely).
mat . x0/x0
Out[297]= {0.0622577, 0.0622577, 0.0622577}
Now verify it is the smallest root.
In[300]:= NSolve[poly == 0, x]
Out[300]= {{x -> -16.0623}, {x -> 0.0622577}, {x -> 20.}}