I have two suggestions. First, I'd change your classifier strategy. Rather than building a separate classifier for each heading, each of which effectively amounts to a true/false decision, I'd create a single classifier that sorts strings into specific types of headings, plus a sort of "discard" or "ignore" class. I'd also coerce everything to lower case (or upper case, or whatever normalization you prefer), and I might trim colons or other decorations as well. A sample of the classification data might look like this:
{"description" -> "Heading:ItemDescription", "product name" -> "Heading:ItemDescription", "nett total" -> "Heading:Subtotal", "sub-total" -> "Heading:Subtotal", "total remittance" -> "Ignore", "invoice total" -> "Ignore"}
Then, rather than running multiple loops, each driven by its own classifier, you can perform the logic for each class found by the single classifier.
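To make that concrete, here's a minimal sketch with Classify. It assumes trainingData is bound to a rule list like the sample above; normalizeHeading is just a hypothetical name for the lower-casing and trimming step.

normalizeHeading[s_String] := StringTrim[ToLowerCase[s], (":" | "." | " ") ..]  (* lower-case, strip leading/trailing colons, dots, spaces *)

headingClassifier = Classify[trainingData]  (* trainingData = a rule list like the sample above *)

headingClassifier[normalizeHeading["DESCRIPTION:"]]  (* should come back as "Heading:ItemDescription" *)

With a single classifier you can then dispatch on the returned class (with Switch, say) instead of looping once per heading type.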
Second, to analyze both horizontally and vertically formatted data, I'd suggest using the extended form of TextRecognize in your OCR logic to get properties for each match, specifically the "BoundingBox". Something like
TextRecognize[...page image..., "Line", {"Text", "BoundingBox"}]
You'll get a bunch of entries like
{"ITEMS", Rectangle[{591, 1899}, {685, 1944}]}
Now, if you find your target headings lined up horizontally, then you process the invoice items as rows; if you find them lined up vertically, then you process the invoice items as columns. If there are cases where things are arranged some other way, you'll need to special-case those, I guess. You'll probably need to allow for some fuzziness in the alignment of the rectangles, but basically you test whether the centers of the rectangles form a mostly horizontal line or a mostly vertical line.
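One rough way to do that test is to compare the spread of the rectangle centers in x versus y: a wide, flat spread suggests a row of headings, a tall, narrow spread suggests a column. This is only a sketch, assuming headingLines from above; the 2.0 ratio is an arbitrary fuzziness threshold you'd tune to your invoices.

rectCenter[Rectangle[{x1_, y1_}, {x2_, y2_}]] := {(x1 + x2)/2, (y1 + y2)/2}

headingLayout[rects_List] := Module[{centers = rectCenter /@ rects, dx, dy},
  dx = Max[centers[[All, 1]]] - Min[centers[[All, 1]]];
  dy = Max[centers[[All, 2]]] - Min[centers[[All, 2]]];
  Which[dx > 2.0 dy, "Horizontal", dy > 2.0 dx, "Vertical", True, "Unknown"]]

headingLayout[headingLines[[All, 2]]]  (* "Horizontal" -> process rows, "Vertical" -> process columns, "Unknown" -> special-case *)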