Is there a common or general way to handle heavy data in a notebook?
I'm trying to load a ROOT-format file of ~500 MB (or more) containing a tree that I've prepared with MC.
Using uproot, I load the tree into a pandas DataFrame (n rows × 25 columns, from a TNtupleD), roughly as shown below.
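For reference, this is more or less what my loading code looks like (the file name `mc_sample.root` and tree name `ntuple` are placeholders for my actual MC output):

```python
import uproot

# open the MC output file and materialize the whole TNtupleD as one DataFrame
with uproot.open("mc_sample.root") as f:   # placeholder file name
    tree = f["ntuple"]                     # placeholder tree name
    df = tree.arrays(library="pd")         # n rows x 25 columns pandas DataFrame
```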
When I check memory usage with the `top` command in the web terminal, the notebook had used up to 10 GB of memory by the time it crashed. (I configured my session with a 10 GB memory limit.)
I can handle smaller data files by limiting the size of the MC output dataset, but I suspect there is a better approach that would let me scale up to larger and larger datasets. One idea I've been wondering about is sketched below.
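For example, would reading the tree in chunks with `uproot.iterate`, something like the sketch below, be the recommended pattern here, or is there a more standard way to handle this on the notebook side? (File, tree, and branch names are placeholders.)

```python
import uproot

# process the tree in ~100 MB chunks instead of loading everything at once;
# "mc_sample.root" and "ntuple" are placeholders for my actual file/tree names
for chunk in uproot.iterate("mc_sample.root:ntuple", step_size="100 MB", library="pd"):
    # each chunk is a pandas DataFrame covering a slice of the entries
    partial = chunk["px"].sum()  # placeholder analysis step on a hypothetical branch
```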