Hi Marco,
Looking back at the emails between myself and Wolfram Technical Support, I guess it was StackExchange that accepted it as a bug.
Wolfram Technical Support: "This particular issue has been filed as a report and our development group is working on solving it for a future release. The introduction of this behavior comes from the reworking of CSV importing in Mathematica 11.2 and, as some of your posts mention, the workaround is to not specify {"Data", All} when all of the data needs to be imported."
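For anyone following along, here is a minimal sketch of the workaround support describes, i.e. dropping the {"Data", All} element specification on import. The file name is just a placeholder for a CSV exported by an earlier release:

    file = "olddata.csv";  (* hypothetical: a CSV exported by Mathematica 11.1 *)

    (* reported to silently truncate rows in 11.2 *)
    withSpec = Import[file, {"Data", All}];

    (* workaround per support: omit the {"Data", All} element spec *)
    withoutSpec = Import[file, "Data"];

    Length /@ {withSpec, withoutSpec}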
I replied that this was a broader issue, asked that it be raised to a higher supervisor, and copied Stephen Wolfram on the email chain:
"If you export/import CSV data in Mathematica 11.2, no truncation of rows happens and the TextDelimiters parameter is not needed.
But all CSV files exported in earlier releases of Mathematica, like release 11.1, are subject to hidden row truncation. So all user CSV files storing critical data are at risk.
You can't tell hundreds of thousands of existing Mathematica users to update every program that imports CSV files created by previous Mathematica releases.
You have to fix this.
The default behavior should be that all existing Mathematica CSV import/export works as it always has, and that newer Mathematica CSV functionality can be enabled by adding additional parameters, not the other way around.
The same is true of the long elapsed-time issue.
Please raise this to a higher supervisor.
You have to fix this.
Just some additional comments on this issue:
You should halt all further downloads of Mathematica 11.2 until this issue is fixed in Mathematica 11.3.
This issue may apply to other file types, not just CSV files.
You are going to have to test that.
If a multimillion-dollar financial or drug company heavily relies on Mathematica for simulations and financial forecasts, they should not roll out Mathematica 11.2 in their production and test environments until this issue is fixed.
You have no idea what effect hidden data truncation will have on simulations and financial data. And you can't take that chance, whether it's one file, one user, or hundreds of thousands.
If I export 10,000 rows of data, I better get back 10,000 rows of data on import.
Whether a data field has missing or questionable values is not the issue.
Most code will filter or throw out bad data.
But an export/import should return all rows."
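To illustrate the round-trip guarantee I'm asking for, here is a minimal sketch; the data and file name are just placeholders:

    data = RandomReal[1, {10000, 5}];       (* placeholder: 10,000 rows *)
    Export["roundtrip.csv", data];          (* hypothetical file name *)
    back = Import["roundtrip.csv", "Data"];
    Length[back] === Length[data]           (* should be True: all 10,000 rows come back *)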