Goodness-of-fit is often in the eye of the beholder. As a statistician, I see that the first five values have positive residuals and the last three have negative residuals (a pattern you can check with the "FitResiduals" sketch after the output below). That suggests the model does not fit well: either the proposed curve form is wrong, or the assumption of additive independent random errors is not supported, or both. Also, R-squared is not very meaningful for describing the strength of the relationship when the curve is forced through the origin. In addition, I'd argue that the point {0, 0} is likely NOT a data point, as the curve form has to go through the origin.
A better measure of goodness of fit is the root mean square error, which is in the same units as the dependent variable. Given the undesirable residuals, you might consider a slightly more complex model where the exponent "3" is replaced with a parameter (a baseline comparison is sketched below).
data = {{1800, 0.258738807}, {3600, 0.362822881}, {5400, 0.473489839},
   {7200, 0.54841429}, {9000, 0.614915391}, {10800, 0.638559079},
   {14400, 0.68690743}, {18000, 0.730710491}};
(* Fit the model using the data *)
nlm = NonlinearModelFit[data, 1 - 1/(1 + b (x/1000))^k, {b, k}, x]
(* Parameter estimates *)
nlm["BestFitParameters"]
(* Root mean square error *)
nlm["EstimatedVariance"]^0.5
(* Show the fit *)
Show[Plot[nlm[x], {x, 0, 18000}, PlotRange -> All], ListPlot[data]]
with output
{b -> 0.225117, k -> 0.818872}
0.0140814
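To check the residual sign pattern mentioned above for any fitted model, the "FitResiduals" property of the fitted model object is convenient; a quick sketch:

(* Inspect the residuals; a run of same-signed residuals at either
   end of the data suggests a systematic lack of fit *)
nlm["FitResiduals"]
Sign[nlm["FitResiduals"]]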

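To see what replacing the fixed exponent buys you, here is a minimal comparison sketch, assuming the original curve simply fixed the exponent at 3 (the name nlm3 is just illustrative):

(* Hypothetical baseline: same curve form with the exponent fixed at 3 *)
nlm3 = NonlinearModelFit[data, 1 - 1/(1 + b (x/1000))^3, {b}, x];
(* Compare root mean square errors of the two fits; lower is better *)
{Sqrt[nlm3["EstimatedVariance"]], Sqrt[nlm["EstimatedVariance"]]}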
I've also explicitly divided the independent variable by 1,000, since standardizing independent variables (either by subtracting the mean and dividing by the standard deviation, or by subtracting the minimum and dividing by the range) often fixes numerical instabilities.
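As a sketch of the two standardizations mentioned (variable names here are just illustrative):

(* Subtract the minimum and divide by the range, i.e., map x onto [0, 1] *)
xScaled = Rescale[data[[All, 1]]];
(* Or subtract the mean and divide by the standard deviation *)
xStandardized = Standardize[data[[All, 1]]];
(* Pair the rescaled x-values back up with the original y-values *)
scaledData = Transpose[{xScaled, data[[All, 2]]}];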