Is there a more efficient way to import a large number of files?

Posted 5 months ago | 847 Views | 7 Replies | 3 Total Likes
 I'm working on an image classification machine learning project and have a somewhat large (~800 MiB) training data set. The data set is composed of individual PNG image files organized in directories. I tried importing them all with

loadedData = ParallelMap[Import, dataFiles]

but it takes an extremely long time (10+ minutes) and ends up using 21 GiB of memory. Obviously, something is wrong. Is there a more efficient or more correct way to load all these images?
Posted 5 months ago
 Try:

loadedData = ParallelMap[Import[#, IncludeMetaInformation -> None] &, dataFiles]
Posted 5 months ago
 This appears not to have made any (or much of a) difference. I'm not on a machine capable of loading 21 GiB of data at the moment, but it's still using at least 16 GiB before failing. I'm bewildered as to where all of this apparent data is coming from.
Posted 5 months ago
 Also, if you want to train a neural net with those images, you can perform out-of-core training. See this example: http://www.wolfram.com/language/11/neural-networks/out-of-core-image-classification.html?product=language
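As a minimal sketch of what that example does: instead of importing every image up front, you hand NetTrain a list of File[...] references paired with their labels, and batches are read from disk as needed. Here, dataFiles and labels are assumed to be parallel lists of your image paths and their classes, and myNet is a placeholder for your network.

```
(* Sketch: out-of-core training with file references.
   dataFiles, labels, and myNet are placeholders for your own data and network. *)
trainingData = Thread[File /@ dataFiles -> labels];

(* NetTrain reads and decodes only the images needed for each batch,
   so the full uncompressed data set never has to fit in memory. *)
trained = NetTrain[myNet, trainingData, BatchSize -> 64]
```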
Posted 5 months ago
 I'm aware of out-of-core training, but I had expected that it wouldn't be necessary for the size of my data.
Posted 5 months ago
 PNG is usually (always?) compressed. When it's imported into the Wolfram Language, the size increases (sometimes quite dramatically), since it's stored as an uncompressed array in memory. This is one major advantage of out-of-core training: you can store all your images in a compressed format on disk and keep only small batches of uncompressed images in memory at any one time.

One last thing: JPG is a better format for out-of-core learning than PNG, as the Image NetEncoder is faster for that format.
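You can check this expansion yourself for a single file by comparing the compressed size on disk with the size of the imported expression in memory. In this sketch, "example.png" is a placeholder path; the ratio you see will depend on the image's content and compression level.

```
(* Sketch: compare on-disk (compressed) size with in-memory (uncompressed) size.
   "example.png" is a placeholder for one of your training images. *)
file = "example.png";
img = Import[file];

FileByteCount[file]  (* compressed PNG size on disk, in bytes *)
ByteCount[img]       (* size of the uncompressed Image expression in memory *)
```

Summing this ratio over ~800 MiB of PNGs is where the 21 GiB comes from.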