I have a very large array of doubles that I am handling with a disk-based file and a paging List of MappedByteBuffers; see this question for more background. I am running on Windows XP using Java 1.5.
Here is the key part of my code that allocates the buffers against the file...
try
{
    // create a random access file and size it so it can hold all our data = the extent x the size of a double
    f = new File(_base_filename);
    _filename = f.getAbsolutePath();
    _ioFile = new RandomAccessFile(f, "rw");
    _ioFile.setLength(_extent * BLOCK_SIZE);
    _ioChannel = _ioFile.getChannel();

    // make enough MappedByteBuffers to handle the whole lot
    _pagesize = bytes_extent;
    long pages = 1;
    long diff = 0;
    while (_pagesize > MAX_PAGE_SIZE)
    {
        _pagesize /= PAGE_DIVISION;
        pages *= PAGE_DIVISION;
        // make sure we are at double boundaries: we cannot have a double spanning pages
        diff = _pagesize % BLOCK_SIZE;
        if (diff != 0) _pagesize -= diff;
    }

    // what is the difference between the total bytes covered by all the pages and the
    // total overall bytes? There is a good chance we'll have a few left over because of the
    // rounding down that happens when the page size is divided
    diff = bytes_extent - (_pagesize * pages);
    if (diff > 0)
    {
        // check whether adding the remainder to the last page would tip it over the max size;
        // if not, we just need to allocate the remainder to the final page
        if (_pagesize + diff > MAX_PAGE_SIZE)
        {
            // need one more page
            pages++;
        }
    }

    // make the byte buffers and put them on the list
    int size = (int) _pagesize; // safe cast: the loop above drops _pagesize below Integer.MAX_VALUE
    long offset = 0;            // must be a long: for files over 2 GB, page * _pagesize overflows an int
    for (int page = 0; page < pages; page++)
    {
        offset = page * _pagesize;
        // the last page should be just big enough to accommodate any leftover odd bytes
        if ((bytes_extent - offset) < _pagesize)
        {
            size = (int) (bytes_extent - offset);
        }
        // map the buffer to the right place
        MappedByteBuffer buf = _ioChannel.map(FileChannel.MapMode.READ_WRITE, offset, size);
        // stick the buffer on the list
        _bufs.add(buf);
    }

    Controller.g_Logger.info("Created memory map file: " + _filename);
    Controller.g_Logger.info("Using " + _bufs.size() + " MappedByteBuffers");
    _ioChannel.close();
    _ioFile.close();
}
catch (Exception e)
{
    Controller.g_Logger.error("Error opening memory map file: " + _base_filename);
    Controller.g_Logger.error("Error creating memory map file: " + e.getMessage());
    e.printStackTrace();
    Clear();
    if (_ioChannel != null) _ioChannel.close();
    if (_ioFile != null) _ioFile.close();
    if (f != null) f.delete();
    throw e;
}
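For context, element access works by splitting a global index across the pages. This is only a minimal sketch of that arithmetic (the get helper is hypothetical; _bufs, _pagesize and BLOCK_SIZE are the fields from the code above, and BLOCK_SIZE is assumed to be 8 bytes per double):

    // Hypothetical accessor: relies on _pagesize being a multiple of BLOCK_SIZE,
    // which the allocation loop above guarantees.
    double get(long index)
    {
        long byteOffset = index * BLOCK_SIZE;
        int page = (int) (byteOffset / _pagesize);          // which MappedByteBuffer
        int offsetInPage = (int) (byteOffset % _pagesize);  // where within it
        return ((MappedByteBuffer) _bufs.get(page)).getDouble(offsetInPage);
    }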
I get the error mentioned in the title after I allocate the second or third buffer.
I thought it was something to do with the contiguous memory available, so I have tried different page sizes and page counts, but to no avail.
What exactly does "Not enough storage is available to process this command" mean and what, if anything, can I do about it?
I thought the point of MappedByteBuffers was to be able to handle structures larger than you could fit on the heap, and treat them as if they were in memory.
Any clues?
EDIT:
In response to an answer below (@adsk), I changed my code so that I never have more than a single active MappedByteBuffer at any one time. When I refer to a region of the file that is currently unmapped, I junk the existing map and create a new one. I still get the same error after about 3 map operations.
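For reference, this is roughly the shape of the remap-on-demand logic I mean (a simplified sketch: _ioChannel, _pagesize and bytes_extent are the fields from the code above, but ensureMapped itself is a hypothetical helper, and unlike the code above the channel has to stay open for the lifetime of the object):

    private MappedByteBuffer _current;    // the only live mapping
    private long _currentPage = -1;       // which page it covers

    private MappedByteBuffer ensureMapped(long byteOffset) throws IOException
    {
        long page = byteOffset / _pagesize;
        if (page != _currentPage)
        {
            _current = null;  // drop the old mapping and hope the GC unmaps it
            _current = _ioChannel.map(FileChannel.MapMode.READ_WRITE,
                                      page * _pagesize,
                                      Math.min(_pagesize, bytes_extent - page * _pagesize));
            _currentPage = page;
        }
        return _current;
    }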
The bug quoted there, where the GC fails to collect MappedByteBuffers, still seems to be a problem in JDK 1.5.
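If that bug is the cause, the only workaround I have seen is to force the unmap instead of waiting for the GC, by reflecting into Sun JVM internals. A sketch of that approach (non-portable: it relies on sun.nio.ch.DirectBuffer and sun.misc.Cleaner, which exist in Sun's JDKs but are not part of the public API):

    // WARNING: Sun-JVM-only hack. Touching the buffer after clean() can crash the VM.
    static void unmap(MappedByteBuffer buffer)
    {
        sun.misc.Cleaner cleaner = ((sun.nio.ch.DirectBuffer) buffer).cleaner();
        if (cleaner != null) cleaner.clean();
    }

Calling something like this on the old buffer before mapping the next page would at least tell me whether unreleased mappings are what is exhausting the address space.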