Here's something that works well for me. It runs much faster in the Cloud than on the desktop (roughly 3 seconds vs. 30 seconds), and clearing caches made little difference.
displayColumns = {"Path", "FileByteCount", "FileType", "LastAccessed"};
cloudObjectsDS = Dataset[(First[CloudObjectInformation[#]]) & /@ CloudObjects[]];
sortedAndFiltered = (SortBy[cloudObjectsDS, (#["FileByteCount"]) &] // Reverse)[[All, displayColumns]]
The steps are:
- Use CloudObjects[] to get a list of all your cloud objects; there are also options to filter by particular types.
- Map over the list with CloudObjectInformation. The first part of each item in the result list is the association containing the data we're interested in.
- Sort by the file byte count, reversing so the largest files come first, and then select the columns we're most interested in; your choice of columns might be different, of course.
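For example, if you only care about one kind of object, CloudObjects can narrow the listing up front. As I recall the signature takes a location and a type string (worth double-checking against the CloudObjects documentation), something like:

(* hypothetical sketch: list only notebook objects under the cloud root, assuming the two-argument CloudObjects[loc, type] form *)
notebookObjects = CloudObjects[$CloudRootDirectory, "Notebook"];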
A more verbose version that breaks the process down explicitly into sub-steps:
cloudObjects = CloudObjects[];
displayColumns = {"Path", "FileByteCount", "FileType", "LastAccessed"};
cloudObjectsDS = Dataset[(First[CloudObjectInformation[#]]) & /@ cloudObjects];
sortedByFileSize = SortBy[cloudObjectsDS, (#["FileByteCount"]) &] // Reverse;
sortedAndFiltered = sortedByFileSize[[All, displayColumns]]
Note that we've retrieved all the information for the cloud objects and filtered only at the last step. If we're really only interested in the file size, we might shave off a bit of time and memory by fetching only the relevant properties, but I didn't see much of a difference when experimenting with that.
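If you do want to try that, CloudObjectInformation can be asked for a single property rather than the whole association, along these lines (a sketch, assuming the CloudObjectInformation[obj, "property"] form behaves as I remember):

(* fetch only the byte count for each object instead of the full information record *)
sizes = CloudObjectInformation[#, "FileByteCount"] & /@ CloudObjects[];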
Edit: minor code change to avoid scrolling