As of now (start of 2019), Dataset is not really designed for large datasets. It carries a lot of overhead, which is what you are experiencing. Rectangular arrays of a single type, stored as packed arrays, are the best way to handle large datasets.
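For example (a minimal sketch, not your actual data): convert the dates to absolute times once and keep them as a packed numeric array, which is compact and fast to operate on.

absTimes = Developer`ToPackedArray[N[AbsoluteTime /@ DateRange["jan. 1st, 2001", "2019"]]]; (* one machine real per date *)
Developer`PackedArrayQ[absTimes] (* True: compact storage, fast vectorized operations *)
ByteCount[absTimes] (* far smaller than ByteCount of the corresponding DateObject list *)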
Datasets are convenient and very flexible, but that flexibility comes at a cost. The same holds for DateObjects. If you leave the dates as strings, the speed-up is as expected:
dates = DateString /@ DateRange["jan. 1st, 2001", "2019"]; (* date strings rather than DateObjects *)
Map[AbsoluteTime, dates]; // AbsoluteTiming
ParallelMap[AbsoluteTime, dates, Method -> "CoarsestGrained"]; // AbsoluteTiming
But note that interpreting a date string into an AbsoluteTime is a much heavier operation than converting a DateObject to an AbsoluteTime. In your case the heavy lifting (the string interpretation) had already been done, e.g. by SemanticImport.
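You can see the difference by timing the two conversions directly (a rough sketch; the numbers will vary per machine and version):

AbsoluteTime["jan. 1st, 2001"] // RepeatedTiming (* the string has to be interpreted first *)

do = DateObject[{2001, 1, 1}];
AbsoluteTime[do] // RepeatedTiming (* interpretation already done; just a conversion *)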