With a large number of data points I found that importing them as binary files is the fastest, probably because the data can be read directly from the file into memory without any parsing.
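A minimal sketch of such a binary round trip, using the built-in BinaryWrite/BinaryReadList (the file name here is just a placeholder):

```mathematica
data = RandomReal[1, 10^6];

(* write the machine reals as raw 64-bit floats *)
BinaryWrite["data.bin", data, "Real64"];
Close["data.bin"];

(* read them straight back into memory, no parsing needed *)
imported = BinaryReadList["data.bin", "Real64"];
```

Because "Real64" matches the in-memory representation of machine reals, the read is essentially a straight copy from disk.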
For the .dat files there are several ways to read them:
Going from SemanticImport, to Import, to ReadList, the speed goes from slow, to normal, to fast, but they differ in robustness: the faster the function, the stricter its assumptions about the file's contents.
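For a plain whitespace-separated numeric .dat file, the three options look roughly like this (file name again a placeholder):

```mathematica
file = "data.dat";

(* slowest, most robust: infers column types, returns a Dataset *)
ds = SemanticImport[file];

(* middle ground: returns a list of lists of parsed values *)
tbl = Import[file, "Table"];

(* fastest, least forgiving: assumes the file is nothing but numbers *)
nums = ReadList[file, Number, RecordLists -> True];
```

ReadList will happily fail or misread if the file contains headers or non-numeric entries, which is exactly the robustness trade-off mentioned above.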
In the end it also depends on WHAT you read: a 2.4 GB text file is quite hefty, but the same data might be much smaller when stored in binary...