This is a general limitation of neural networks, known as catastrophic forgetting: retraining a network on only the new data will make it forget much of what it learned from the original data. There are techniques to work around this with some degree of success. One is rehearsal: include the new images together with a sample of the existing dataset and retrain the network on the mix, as in the sketch below. I doubt it will work well for image reconstruction, though, since the new data will still pull the network away from what it learned for the old images.
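To make the rehearsal idea concrete, here is a minimal sketch of fine-tuning on new data mixed with a replay sample of the old data. It assumes PyTorch; `model`, `old_dataset`, and `new_dataset` are hypothetical placeholders (datasets are assumed to yield `(image, label)` pairs, and the model is treated as an image autoencoder trained with a reconstruction loss).

```python
# Rehearsal sketch: fine-tune on new data plus a random replay subset of old data.
# Assumes PyTorch; model/old_dataset/new_dataset are hypothetical placeholders.
import torch
from torch.utils.data import ConcatDataset, DataLoader, Subset

def rehearsal_finetune(model, old_dataset, new_dataset, replay_fraction=0.1,
                       epochs=5, lr=1e-4, device="cpu"):
    # Keep only a small random subset of the original data for replay.
    n_replay = int(len(old_dataset) * replay_fraction)
    replay_idx = torch.randperm(len(old_dataset))[:n_replay].tolist()
    mixed = ConcatDataset([new_dataset, Subset(old_dataset, replay_idx)])
    loader = DataLoader(mixed, batch_size=32, shuffle=True)

    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.MSELoss()  # reconstruction loss for an autoencoder
    model.to(device).train()
    for _ in range(epochs):
        for images, _ in loader:
            images = images.to(device)
            recon = model(images)
            loss = loss_fn(recon, images)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model
```

The replay fraction trades off memory/compute against how strongly the old distribution is preserved; even a small fraction usually helps compared to retraining on the new data alone.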
There is also research showing that deep networks can eventually generalize long after the overfitting stage when trained for many thousands of steps, a phenomenon called "grokking". Reasons cited include the network eventually finding a better minimum of the cost function and learning more efficient representation and transformation functions (https://arxiv.org/abs/2201.02177).
In theory, it should be possible to design a network that internally learns something like a DCT or wavelet transform, or something even better, so that it can compress and reconstruct arbitrary images on the fly once trained long enough, but as far as I know this has not been studied in depth yet.
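As a rough illustration of that idea, here is a minimal sketch of a network with learned "analysis" and "synthesis" transforms around a bottleneck, loosely analogous to a learned DCT/wavelet basis. It assumes PyTorch; the layer sizes, bottleneck width, and input resolution are illustrative choices, not something from the question.

```python
# Sketch of a network that learns its own compressive transform
# (loosely analogous to a learned DCT/wavelet basis). Assumes PyTorch;
# all sizes are illustrative.
import torch
import torch.nn as nn

class LearnedTransformCoder(nn.Module):
    def __init__(self, bottleneck_channels=8):
        super().__init__()
        # "Analysis" transform: maps the image into a compact code.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=4, stride=2, padding=1),   # H/2
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2, padding=1),  # H/4
            nn.ReLU(),
            nn.Conv2d(64, bottleneck_channels, kernel_size=3, padding=1),
        )
        # "Synthesis" transform: reconstructs the image from the code.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(bottleneck_channels, 64, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 3, kernel_size=3, padding=1),
            nn.Sigmoid(),  # pixel values in [0, 1]
        )

    def forward(self, x):
        code = self.encoder(x)      # compressed representation
        return self.decoder(code)   # reconstruction

model = LearnedTransformCoder()
x = torch.rand(1, 3, 64, 64)        # dummy image batch
recon = model(x)
print(recon.shape)                  # torch.Size([1, 3, 64, 64])
```

For the transform to work "on the fly" for generic images rather than memorizing a fixed set, it would have to be trained on a large and diverse image collection, ideally long enough (per the grokking result above) to settle on a general-purpose transform rather than an overfit one.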