I am working on a project that involves a large matrix, pTA, of size 8*(p+2). I attempted to compute the sum of its negative eigenvalues on a supercomputer with 128 GB of RAM. For p = 300, the computation took around 4 hours and memory usage peaked at 95%. However, I have realized that p = 300 is insufficient for my research, and I need to set p = 700. Given that the machine already struggled at p = 300, I am certain it will not be able to handle p = 700.
I suspect that my Mathematica code is not optimized, which may be causing the excessive memory usage and long computation times. In particular, I believe that inside my ParallelTable loop, Mathematica computes the full spectrum at every step, even though I only need the negative eigenvalues.
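For reference, here is a minimal sketch of the kind of computation I mean, without the actual code: `buildPTA` is a hypothetical placeholder for however pTA is really constructed, and the matrix is assumed to be numeric and symmetric.

```mathematica
(* Hypothetical sketch; buildPTA stands in for the real construction of pTA. *)
p = 700;
A = N[buildPTA[p]];  (* force machine precision so Eigenvalues uses LAPACK
                        rather than slow exact/symbolic arithmetic *)

(* For a dense symmetric numeric matrix of size 8*(p+2), compute the full
   spectrum once and sum the negative part: *)
negSum = Total[Select[Eigenvalues[A], Negative]];
```

If pTA is sparse, it may also be worth asking only for the most negative eigenvalues of a `SparseArray` via `Eigenvalues[sA, -k, Method -> "Arnoldi"]` instead of computing the full spectrum, though whether that applies depends on how pTA is actually built.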
Can someone help me optimize my program to make it significantly faster and less memory-intensive, so it can handle larger values of p efficiently?