I am so sorry that I am joining the discussion so late. I thought I would get a notification if I was mentioned in a post, or perhaps I simply overlooked it.
Yes! I have done a lot of research on graph complexity, and I think I can safely say that I have an edge in the subject: I have come up with complexity characterizations of graphs and networks that are robust, while also revealing ways in which graphs and networks cannot be characterized, e.g. with Entropy. In the paper below, we show how Entropy can easily be fooled, or else forces one to make an arbitrary choice of a feature of interest, in which case the 'complexity function' (Entropy) can simply be replaced by a function of the selected feature (e.g. edge density, degree-sequence normality, etc.):
Low Algorithmic Complexity Entropy-deceiving Graphs
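As a minimal illustration (the choice of graphs here is mine, not from the paper): the Shannon Entropy of the flattened adjacency matrix depends only on the frequency of 0s and 1s, i.e. on edge density, so a trivially structured graph and a random graph with the same number of vertices and edges get exactly the same value:

    (* two 16-vertex, 24-edge graphs: a regular grid and a random graph *)
    g1 = GridGraph[{4, 4}];
    g2 = RandomGraph[{16, 24}];

    (* base-2 Shannon Entropy of the flattened 0/1 adjacency matrix *)
    ent[g_] := Entropy[2, Flatten[Normal[AdjacencyMatrix[g]]]]

    {ent[g1], ent[g2]}  (* identical: Entropy only sees the 0/1 frequencies *)

Both values come out identical even though the grid is algorithmically trivial, and picking another feature (degree sequence, motif counts, and so on) only moves the arbitrariness elsewhere.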
So one may think, as was suggested, that using ByteCount@Compress[network] would then make things better, and it would if there were lossless compression algorithms based on measures other than Shannon Entropy rate! In fact, Compress, just like LZW (and thus gzip, PNG and so on), is a compression algorithm that looks for statistical repetitions and is therefore nothing but an Entropy-rate estimator up to the moving-window length of the algorithm. (LZW is proven 'universal' because the window length is a function of the running computer's memory, but that window is obviously finite; and even if 'universal' in that sense, it is not 'universal' in the Turing, algorithmic sense.)
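A quick way to see this (a toy example of my own, not taken from the papers): the binary digits of Pi are algorithmically very simple, since a short program generates them, yet they look statistically random, so a statistical compressor such as Compress gains essentially nothing on them compared with pseudo-random bits:

    (* 4000 binary digits of Pi vs 4000 pseudo-random bits *)
    piBits = First[RealDigits[Pi, 2, 4000]];
    rndBits = RandomInteger[1, 4000];

    (* the compressed sizes come out very close, even though the digits of Pi
       have tiny algorithmic (program-size) complexity *)
    {ByteCount[Compress[piBits]], ByteCount[Compress[rndBits]]}

An estimator sensitive to algorithmic structure, rather than only to statistical repetition, should assign the first sequence a much lower value.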
Notice also that one major issue with most (if not all, I suspect) other measures is that they are either based on linear structures, such as the degree sequence, or they linearize the network in some way (even by flattening its adjacency matrix), while some others are not robust to a change of network description or return disparate values for isomorphic graphs. So there are loads of issues.
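As one concrete example (again a toy illustration of my own): a Compress-based score computed on the flattened adjacency matrix is not even invariant under a relabelling of the vertices, so two isomorphic graphs typically receive different values:

    (* a structured graph and an isomorphic copy with randomly permuted labels *)
    m = Normal[AdjacencyMatrix[GridGraph[{10, 10}]]];
    perm = RandomSample[Range[Length[m]]];
    mPerm = m[[perm, perm]];  (* adjacency matrix of an isomorphic graph *)

    (* same graph up to isomorphism, yet typically different 'complexity' scores *)
    {ByteCount[Compress[Flatten[m]]], ByteCount[Compress[Flatten[mPerm]]]}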
So what options do we have? The only option around is to approximate the algorithmic complexity of the graph (because the invariance theorem guarantees robustness) using some of the methods I introduced that are truly related to algorithmic complexity beyond Shannon Entropy, namely by way of Algorithmic Probability. Notice that the proposal for graph complexity that we introduced is a measure native to the dimension of the graph (e.g. dimension 2 when looking at its adjacency matrix) and thus does not artificially linearize the graph, and we have shown both theoretically and numerically that the measure is robust and retrieves the expected complexity for both labelled and unlabelled graphs. Papers showing how, giving the technical details and sometimes some code or data so you can do this on your own (a schematic sketch of the block-decomposition idea follows the references):
Methods of Information Theory and Algorithmic Complexity for Network Biology (H. Zenil, N.A. Kiani and J. Tegnér)
Seminars in Cell and Developmental Biology, vol. 51, pp. 32-43, 2016.
Available here.
Two-Dimensional Kolmogorov Complexity and Validation of the Coding Theorem Method by Compressibility (H. Zenil, F. Soler-Toscano, J.-P. Delahaye and N. Gauvrit)
PeerJ Computer Science, 1:e23, 2015
Available here.
and
Correlation of Automorphism Group Size and Topological Properties with Program-size Complexity Evaluations of Graphs and Complex Networks (H. Zenil, F. Soler-Toscano, K. Dingle and A. Louis)
Physica A: Statistical Mechanics and its Applications, vol. 404, pp. 341-358, 2014.
Paper available here. Preprint available here.
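To give a flavour of how the Algorithmic Probability approach works natively on the adjacency matrix, here is a schematic sketch of the Block Decomposition Method described in the papers above; note that ctm below is only a placeholder I am using for illustration, standing in for the precomputed Coding Theorem Method values that the actual implementation and the online calculator rely on:

    (* Schematic Block Decomposition Method (BDM) over an adjacency matrix:
       split the matrix into small blocks, then sum, over the distinct blocks,
       a CTM value plus Log2 of the block's multiplicity.
       ctm[...] is a placeholder; real values come from precomputed CTM tables. *)
    bdm[mat_, blockSize_: 4] := Module[{blocks, tally},
      blocks = Flatten[Partition[mat, {blockSize, blockSize}], 1];
      tally = Tally[blocks];
      Total[ctm[#[[1]]] + Log2[#[[2]]] & /@ tally]
    ]

    (* With ctm left undefined this returns a symbolic sum showing the structure;
       supplying the CTM values yields the BDM estimate of the graph's complexity. *)
    bdm[Normal[AdjacencyMatrix[GridGraph[{4, 4}]]]]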
Finally, to explain all this, I have created two videos, available here; and, specifically for graph complexity, here.
Some WL code, and even an online complexity calculator, are available here.
Sorry for the late reply @VitaliyKaurov!