Hi Steve,
Thanks for your interest. No, I'm sorry I don't have code for training a RetinaNet.
In the DataConverters folder there is some code for reading and parsing COCO files.
The Experimental folder has code that would be useful. For example, there is code implementing the alpha-weighted Focal loss used in RetinaNet. I also have other neural nets that use the mask-based loss backpropagation you would need, to ensure that only the loss for detected objects is backpropagated through the bounding box regression layer.
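To make that concrete, here is a rough sketch of those two pieces in Python/NumPy (this is not the actual repo code, and all the names are just illustrative):

    import numpy as np

    def focal_loss(p, y, alpha=0.25, gamma=2.0):
        # Alpha-weighted focal loss: y is the 0/1 class target, p the
        # predicted probability after the sigmoid.
        p_t = np.where(y == 1, p, 1.0 - p)
        alpha_t = np.where(y == 1, alpha, 1.0 - alpha)
        return -alpha_t * (1.0 - p_t) ** gamma * np.log(np.clip(p_t, 1e-7, 1.0))

    def masked_box_loss(pred_boxes, target_boxes, positive_mask):
        # Smooth L1 on the box regression, multiplied by a mask so that only
        # anchors matched to a detected object contribute; the zeroed entries
        # then backpropagate nothing through the regression layer.
        diff = np.abs(pred_boxes - target_boxes)
        loss = np.where(diff < 1.0, 0.5 * diff ** 2, diff - 0.5)
        return loss * positive_mask[:, None]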
There is other code in Experimental, written for other neural nets, that does things like map bounding boxes back onto an array, which you would need in order to build the targets for the network.
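Roughly, that target building would look something like this (again only a sketch with made-up names; iou() here is assumed to be a box-overlap helper):

    import numpy as np

    def build_targets(gt_boxes, gt_labels, anchors, num_classes, iou_thresh=0.5):
        # Map the ground-truth boxes back onto the anchor array: any anchor
        # overlapping a box above the threshold takes that box as its
        # regression target and the box's class as its classification target.
        n = len(anchors)
        cls_targets = np.zeros((n, num_classes))
        box_targets = np.zeros((n, 4))
        positive_mask = np.zeros(n)
        for box, label in zip(gt_boxes, gt_labels):
            ious = iou(anchors, box)        # iou() is a hypothetical helper
            matched = ious >= iou_thresh
            cls_targets[matched, label] = 1.0
            box_targets[matched] = box      # or the box encoded relative to the anchor
            positive_mask[matched] = 1.0
        return cls_targets, box_targets, positive_mask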
The code in the Experimental folder is a little scruffy (I'd be happy to tidy some of it up if that's of interest). So I think all the functionality that would be needed is there, but it would need to be assembled together and made to fit the target outputs specific to RetinaNet. I am sure it can be done, but it is not a completely trivial exercise.
Thinking about it, I suspect that the really smart way of doing it would be not to try to train each of the multi-resolution structures separately, but to train straight off the two output ports, i.e. the Classes and Boxes ports. It seems too easy, but I can't think of a reason at the moment why it would not work. That would save a lot of the trouble of matching the RetinaNet structure. It might also have the benefit that the methodology would be portable to training other object detection nets. You'd still need to write that masking loss function (but again, that could be portable across net architectures).
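In code terms the whole training loss would then be something like this, using the focal_loss and masked_box_loss sketched above and normalising by the number of matched anchors, as the RetinaNet paper does (again, purely illustrative):

    def total_loss(classes_out, boxes_out, cls_targets, box_targets, positive_mask):
        # Train directly against the two final ports: focal loss on the
        # Classes port, masked smooth L1 on the Boxes port.
        num_pos = max(positive_mask.sum(), 1.0)
        cls = focal_loss(classes_out, cls_targets).sum() / num_pos
        box = masked_box_loss(boxes_out, box_targets, positive_mask).sum() / num_pos
        return cls + box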
I started writing this thinking it would be a significant amount of work, but if it is possible to ignore all the internal structure, then as a bonus that approach would work for all the nets. Interesting thought.
There is still the issue of choosing sensible learning rates and learning schedules, which can be quite a tricky area.
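For what it's worth, I'd probably start with something simple like a linear warm-up followed by step drops, along these lines (the numbers are only illustrative and would need tuning):

    def learning_rate(step, base_lr=0.01, warmup_steps=500, drops=(60000, 80000)):
        # Linear warm-up, then divide the rate by 10 at each milestone.
        if step < warmup_steps:
            return base_lr * step / warmup_steps
        lr = base_lr
        for milestone in drops:
            if step >= milestone:
                lr *= 0.1
        return lr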
I'd be very happy to help if you'd like to take it further. Do you have specific application areas in mind?
With kind regards, Julian.