VL - Memory Access violation with big arrays and nested for loops

I’m processing an image using OpenCV (via the OpenCvSharp wrapper) in C# in VL. The images in question can get really big, 4k or 8k resolution (4096 x 2048, for example).

Part of my logic involves filling a couple of different float and float2 arrays with information from the parsing.

I can consistently crash VL with larger images of 4k resolution, and if I catch the exception in a debugger from Visual Studio, it complains about an external VL process trying to access bad memory. My logic works for smaller images without crashing. Is there some kind of memory limit within VL processes?

I can typically get it to crash within the first nested for loop (i.e., where `float weight = getLuminance(colour);` is), but I can’t debug far enough to figure out how far it gets into the image, because the crash always occurs in a VL process, not my node.

Code snippet below for reference:

public static void BuildDistributions(OpenCvSharp.Mat envMap, out Mat marginalDistTexture, out Mat conditionalDistTexture, bool enabled = false)
{
    marginalDistTexture = new Mat();
    conditionalDistTexture = new Mat();

    if (!enabled)
    {
        return;
    }

    int width = envMap.Width;
    int height = envMap.Height;
    int[] sizes = {width, height};

    Point2f[] marginalData = new Point2f[height];
    Point2f[] conditionalData = new Point2f[width * height];

    float[] pdf2d = new float[width * height];
    float[] cdf2d = new float[width * height];
    float[] pdf1d = new float[height];
    float[] cdf1d = new float[height];

    float colWeightSum = 0.0f;

    for (int y = 0; y < height; y++)
    {
        float rowWeightSum = 0.0f;

        for (int x = 0; x < width; x++)
        {
            Vec3f colour = envMap.At<Vec3f>(x, y);
            float weight = getLuminance(colour);
            rowWeightSum += weight;

            pdf2d[y * width + x] = weight;
            cdf2d[y * width + x] = rowWeightSum;
        }

        for (int x = 0; x < width; x++)
        {
            pdf2d[y * width + x] /= rowWeightSum;
            cdf2d[y * width + x] /= rowWeightSum;
        }

        colWeightSum += rowWeightSum;
        pdf1d[y] = rowWeightSum;
        cdf1d[y] = colWeightSum;
    }

    for (int y = 0; y < height; y++)
    {
        cdf1d[y] /= colWeightSum;
        pdf1d[y] /= colWeightSum;
    }

    // pre-calculate row and col to avoid a binary search in the shader
    for (int i = 0; i < height; i++)
    {
        float invHeight = (float)i / height;
        int row = LowerBound(ref cdf1d, 0, height, invHeight);
        Point2f result = new Point2f(row / (float)height, pdf1d[i]);
        marginalData[i] = result;
    }

    for (int y = 0; y < height; y++)
    {
        for (int x = 0; x < width; x++)
        {
            float invWidth = (float)x / width;
            int col = LowerBound(ref cdf2d, y * width, (y + 1) * width, invWidth) - y * width;
            Point2f result = new Point2f(col / (float)width, pdf2d[y * width + x]);
            conditionalData[y * width + x] = result;
        }
    }

    // set the output vars
    marginalDistTexture = new Mat(height, 1, MatType.CV_32FC2, marginalData);
    conditionalDistTexture = new Mat(width, height, MatType.CV_32FC2, conditionalData);
}
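(Aside, in case it’s relevant rather than the memory limit: OpenCvSharp’s `Mat.At<T>(i0, i1)` mirrors OpenCV’s `cv::Mat::at(row, col)`, i.e. the first index is the row (y) and the second the column (x). This is a guess, not a confirmed diagnosis, but for a non-square 4096 x 2048 image, `At(x, y)` in the first nested loop would pass x values up to 4095 as a row index even though there are only 2048 rows — an out-of-bounds read that would match an access violation there. A sketch of what the (row, col) order would look like:)

```csharp
// Mat.At<T>(i0, i1) follows cv::Mat::at(row, col), so pass y first, then x.
// With At(x, y) on a 4096 x 2048 image, x can exceed the 2048-row bound.
Vec3f colour = envMap.At<Vec3f>(y, x);

// The Mat constructor also takes (rows, cols), so the conditional texture
// at the end would presumably be:
// conditionalDistTexture = new Mat(height, width, MatType.CV_32FC2, conditionalData);
```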

Good chance it’s related to this:
https://stackoverflow.com/questions/6107322/memory-limitations-in-a-64-bit-net-application

yes, you could try setting the gcAllowVeryLargeObjects flag in the vvvv.exe.config file. just put it below the other gc related flags:
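for reference, the documented form of the setting (from Microsoft’s runtime configuration docs) looks like this:

```xml
<configuration>
  <runtime>
    <gcAllowVeryLargeObjects enabled="true" />
  </runtime>
</configuration>
```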

but it seems:

The maximum number of elements in an array is still 2^32-1, though.

anyway, a 4k or 8k image is way below all that… i wonder whether that’s a limitation of VL, OpenCV, or a bug in your code. could you send us a small patch which uses your operation so we can debug it?
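as a back-of-the-envelope check (assuming 4-byte floats and 8-byte Point2f), the arrays in your snippet are nowhere near those limits at 4k:

```
4096 x 2048         =  8,388,608 elements per width*height array
8,388,608 x 4 bytes = 32 MB  (pdf2d, cdf2d)
8,388,608 x 8 bytes = 64 MB  (conditionalData)
```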

Sure thing.

I uploaded a zip file containing a patch and related DLLs below:

If you want an example of a texture I can crash VL with, try this one at 4k.
