I am reading some very large .vtp, .obj, and .ply files, accessing the data with the numpy_interface.dataset_adapter module, and then performing some operations on the results. For large files my program takes about 40 seconds, so I looked into using Python's multiprocessing to parallelize the operations on the dataset_adapter.PolyData object, but that object cannot be serialized (pickled). Does anyone know a workaround for this? I have tried passing just the file paths as arguments to the worker processes instead, but since each operation then has to re-read the file, that only cuts off about 4 seconds. Thanks!
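
Here is roughly what the file-path approach looks like right now (a simplified sketch; the file name, the task list, and the per-task work are placeholders for my actual operations):

```python
import multiprocessing as mp

import vtk
from vtk.numpy_interface import dataset_adapter as dsa


def run_operation(args):
    filename, op_name = args
    # Each worker re-reads the file here, because the wrapped PolyData
    # object itself can't be pickled and sent through the pool.
    reader = vtk.vtkXMLPolyDataReader()
    reader.SetFileName(filename)
    reader.Update()
    poly = dsa.WrapDataObject(reader.GetOutput())
    points = poly.Points  # numpy-like view of the point coordinates
    # ... the actual per-operation work on points / poly.PointData goes here ...
    return op_name, points.shape


if __name__ == "__main__":
    # Several independent operations over the same large mesh.
    tasks = [("large_mesh.vtp", "op_a"), ("large_mesh.vtp", "op_b")]
    with mp.Pool() as pool:
        results = pool.map(run_operation, tasks)
    print(results)
```

The repeated read in run_operation is what eats most of the potential speedup, which is why I'd like to read once and share the data with the workers instead.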