Using `numpy_to_vtk`, I can't seem to get arrays to stay in my dataset?

I am having issues with data that starts out in NumPy but doesn't properly “stick” in my dataset (peripherally discussed here). The data I am trying to put into VTK lives in a few very large files: one has the geometry, and another has scalar values associated with each node (temperature, pressure, etc.).

Schematically, I start out with a `vtkPartitionedDataSet`, and *in a function* I add each block of the geometry to it:

```python
def get_geometry(fname, pdset):
    with open(fname, "rb") as f:
        # ... find the blocks in the file ...
        for idx, b in enumerate(blocks):
            sg = vtk.vtkStructuredGrid()
            # ... set dimensions and points for this block ...
            pdset.SetPartition(idx, sg)
```
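
(For concreteness, the elided part does something like the following sketch, where `fill_block`, `dims`, and `coords` are stand-ins for whatever the reader parses out of my file format.)

```python
import vtk

def fill_block(sg, dims, coords):
    """Hypothetical helper: dims is (ni, nj, nk); coords yields (x, y, z) tuples."""
    sg.SetDimensions(*dims)
    pts = vtk.vtkPoints()
    for x, y, z in coords:
        pts.InsertNextPoint(x, y, z)  # copies each coordinate into VTK-owned memory
    sg.SetPoints(pts)
```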

The geometry creation seems to work: I can save the file, plot it in ParaView, and it looks fine.
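
(For what it's worth, the save step is just the XML writer; I am assuming VTK 9's `vtkXMLPartitionedDataSetWriter` here.)

```python
writer = vtk.vtkXMLPartitionedDataSetWriter()
writer.SetFileName("geometry.vtpd")
writer.SetInputData(pdset)
writer.Write()
```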

Then, in another function, I read the scalar data into NumPy arrays (these get converted to `vtkFloatArray` later):

```python
import numpy as np

def get_scalars(fname):
    """Return a list (per block) of lists (per variable) of flat NumPy arrays."""
    solution_data = []
    with open(fname, "rb") as f:
        # ... find the blocks in the file ...
        for idx, b in enumerate(blocks):
            block_data = []
            for _ in range(number_of_variables):
                # the data on disk is big-endian float32
                d = np.fromfile(f, dtype=">f4", count=elementcount)
                d = d.reshape(size_of_block)
                # byteswap() rewrites the raw bytes into native order,
                # which is what VTK expects to see
                block_data.append(d.ravel().byteswap())
            solution_data.append(block_data)

    return solution_data
```
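
So the data is meant to be indexed as `solution_data[block][variable]`; for example:

```python
variables = get_scalars("huge_file.dat")
d = variables[0][2]  # block 0, third variable, as a flat float32 array
```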
And in yet another function, I (try to) add the data to the dataset:
```python
import logging
from vtk.util.numpy_support import numpy_to_vtk

def store_scalars(dataset):
    variables = get_scalars("huge_file.dat")
    namelist = get_namelist("name_list.txt")

    for j, g in enumerate(variables):
        p = dataset.GetPartition(j)
        for k, v in enumerate(g):
            a = numpy_to_vtk(v)
            a.SetName(namelist[k])

            logging.debug(f"vtkFloatArray.GetValueRange(): {a.GetValueRange()}")
            p.GetPointData().AddArray(a)
            p.Modified()
```

So, while adding the data, the debug statement that prints each array's value range looks nominally correct: the max and min values are about what I would expect.

But in a debug print just before I save the file to disk, the values are not correct. In short, I create `vtkFloatArray`s to store in the dataset, and a check at creation time shows that the arrays have the right information in them. But seconds later, when I check the array contents before saving them to disk, they contain garbage (numbers on the order of 1e38, and NaN). Perhaps the arrays are pointing at some random chunk of memory after a garbage collection, and I get those numbers instead of something worse, like a segfault?
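
My best guess at a minimal reproduction of the symptom (a sketch, not my actual code; whether it misbehaves may depend on the VTK version, since newer `numpy_to_vtk` implementations try to keep a Python reference to the NumPy buffer):

```python
import numpy as np
from vtk.util.numpy_support import numpy_to_vtk

def make_array():
    d = np.arange(10, dtype=np.float32)  # local array, eligible for GC on return
    return numpy_to_vtk(d)               # default deep=0: wraps d's buffer in place

a = make_array()
print(a.GetValueRange())  # if the buffer was freed, this can print garbage
```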

Is there some way to deep copy the data out of the NumPy-allocated memory and store it more permanently inside the `vtkStructuredGrid` partitions?

It seemed that the obvious answer was to not have NumPy allocate the data memory at all. (This is related to a problem I had earlier.)

So, rather than:

```python
a = numpy_to_vtk(large_NumPy_array)
dataset.GetPointData().AddArray(a)
dataset.Modified()
```

I did this:

```python
a = vtk.vtkFloatArray()
a.SetNumberOfComponents(1)
for e in large_NumPy_array:
    a.InsertNextValue(e)  # copies each element into VTK-owned memory

dataset.GetPointData().AddArray(a)
dataset.Modified()
```
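
That works, but an element-by-element copy is slow for large arrays. If the root cause really is the shallow wrap, a presumably cleaner fix is to ask `numpy_to_vtk` for a deep copy, so that VTK owns its own allocation and the NumPy array can be collected freely:

```python
from vtk.util.numpy_support import numpy_to_vtk

a = numpy_to_vtk(large_NumPy_array, deep=1)  # deep=1: VTK copies the data
dataset.GetPointData().AddArray(a)
dataset.Modified()
```

Alternatively, keeping a long-lived Python reference to each NumPy array (e.g., stashing the `variables` list somewhere that outlives the write) should also keep the buffers from being freed out from under the shallow-wrapped `vtkFloatArray`s.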