Before I get started: yes, I have read a possible duplicate (malloc being weird in linux), cplusplus.com on malloc, and done some searching on Google.
I have a scientific computing problem that requires a very large 2D array. I'm following code from a copy of "Numerical Recipes in C", and I'm running into unallocated memory in the middle of my 2D array. I am working on Windows, using C++ with MSVC 2012.
Here is my 2D array allocation:
  unsigned long nrl=0;
  unsigned long nrh=749774;
  unsigned long ncl=0;
  unsigned long nch=250657;
  unsigned long i, nrow=nrh-nrl+1,ncol=nch-ncl+1;
  double **m;
  if((size_t)(nrow*ncol)<(size_t)nrow){
    m=NULL;
    return m;
  }
  /*allocate pointers to rows*/
  m=(double **)malloc((size_t)(nrow)*sizeof(double*));
  if (!m){
    m=NULL;
    return m;
  }
  /*allocate rows and set pointers to them*/
  m[nrl]=(double *) malloc((size_t)((nrow*ncol)*sizeof(double)));
  if(!m[nrl]){  
    free(m[nrl]);
    free(m);
    m=NULL;
    return m;
  }
  for(i=nrl+1;i<=nrh;i++)m[i]=m[i-1]+ncol;
  /*The 2D array should now be addressable as m[nrl..nrh][ncl..nch]*/
  /*Pseudo-code below*/
  m[0][0]          == Good, allocated memory
  m[125][200923]   == Unallocated, crashes the program
  m[nrh-1][nch-1]  == Good, allocated memory
I am currently relying on malloc to return NULL if memory allocation fails (I do actually get NULL back if I try to allocate very large arrays).
Also, I have attempted double *m = new double[nch*nrh], but that gives me a memory allocation error. I am open to any suggestions for alternative implementations, but I need to be able to tell whether the allocation worked and to reallocate a smaller block if necessary.
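To show what I mean by "reallocate a smaller block", the behaviour I'm after is roughly the following (a sketch only, not my actual code: the name try_alloc and the halving strategy are placeholders, and new (std::nothrow) is just one way to detect the failure without an exception):

  #include <new>      // std::nothrow
  #include <limits>   // std::numeric_limits
  #include <cstddef>  // std::size_t, NULL

  // Try to allocate nrow*ncol doubles as one block, halving nrow until the
  // request succeeds. Returns the block (or NULL) and reports, through the
  // reference parameter, how many rows were actually obtained.
  double *try_alloc(std::size_t &nrow, std::size_t ncol)
  {
    const std::size_t max_sz = std::numeric_limits<std::size_t>::max();
    while (nrow > 0) {
      // only attempt sizes whose element and byte counts fit in size_t
      if (ncol == 0 || (nrow <= max_sz / ncol
                        && nrow * ncol <= max_sz / sizeof(double))) {
        double *block = new (std::nothrow) double[nrow * ncol];
        if (block)
          return block;   // this size fits in memory
      }
      nrow /= 2;          // too big (or would overflow): retry smaller
    }
    return NULL;
  }

The caller would pass in the full nrow first and then use whatever row count comes back.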
EDIT:
This is a C function, but the majority of my code is in C++.
UPDATE:
Thanks to David, I was able to fix the problem. Changing my overflow check from
if((size_t)(nrow*ncol)<(size_t)nrow)
to
if(SIZE_MAX/nrow < ncol || SIZE_MAX/ncol < nrow || nrow*ncol<nrow)
allows malloc to fail when it should.
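For anyone else who runs into this, here is the allocator with that check folded in (a minimal sketch rather than my exact routine: the name alloc_matrix is mine, it assumes zero-based indices as in my nrl = ncl = 0 case, and it also guards the multiplication by sizeof(double), which can wrap in exactly the same way):

  #include <stdint.h>  /* SIZE_MAX */
  #include <stdlib.h>  /* malloc, free, NULL */

  /* Allocate a matrix addressable as m[0..nrow-1][0..ncol-1]: one contiguous
     block of doubles plus an array of row pointers. Returns NULL if any size
     computation would overflow size_t or if malloc fails.
     Assumes nrow >= 1 and ncol >= 1. */
  double **alloc_matrix(unsigned long nrow, unsigned long ncol)
  {
    unsigned long i;
    size_t nelems;
    double **m;

    /* reject the request before nrow*ncol can wrap around */
    if (SIZE_MAX / nrow < ncol || SIZE_MAX / ncol < nrow)
      return NULL;
    nelems = (size_t) nrow * (size_t) ncol;

    /* the byte count nelems*sizeof(double) must also fit in size_t */
    if (SIZE_MAX / nelems < sizeof(double))
      return NULL;

    /* row pointers */
    m = (double **) malloc((size_t) nrow * sizeof(double *));
    if (!m)
      return NULL;

    /* one contiguous block for all the elements */
    m[0] = (double *) malloc(nelems * sizeof(double));
    if (!m[0]) {
      free(m);
      return NULL;
    }

    /* point each row at its slice of the block */
    for (i = 1; i < nrow; i++)
      m[i] = m[i - 1] + ncol;
    return m;
  }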