Java Refuses to Start – Could not reserve enough space for object heap


We have a pool of approximately 20 Linux blades. Some are running SuSE, some are running RedHat. All share NAS space which contains the following three folders:

  • /NAS/app/java – a symlink that points to an installation of a Java JDK. Currently version 1.5.0_10
  • /NAS/app/lib – a symlink that points to a version of our application.
  • /NAS/data – directory where our output is written

All our machines have 2 processors (hyperthreaded) with 4 GB of physical memory and 4 GB of swap space. We limit the number of ‘jobs’ each machine can process at a given time to 6 (this number likely needs to change, but it does not enter into the current problem, so please ignore it for the time being).

Some of our jobs set a max heap size of 512 MB; others reserve a max heap size of 2048 MB. Again, we realize we could exceed our available memory if 6 jobs started on the same machine with the heap size set to 2048 MB, but to our knowledge this has not yet occurred.
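For what it’s worth, the worst case described above can be sketched with a line of arithmetic (slot count and heap sizes taken from the question):

```shell
# Worst case: all 6 job slots on one machine request the 2048 MB maximum heap.
# 4 GB physical + 4 GB swap = 8192 MB available, so the machine would be
# overcommitted by 4 GB before native/JVM overhead is even counted.
echo "$(( 6 * 2048 )) MB requested vs $(( 4096 + 4096 )) MB physical + swap"
```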

The Problem

Once in a while a job will fail immediately with the following message:

Error occurred during initialization of VM
Could not reserve enough space for object heap
Could not create the Java virtual machine.

We used to chalk this up to too many jobs running at the same time on the same machine. The problem happened infrequently enough (MAYBE once a month) that we’d just restart it and everything would be fine.

The problem has recently gotten much worse. All of our jobs which request a max heap size of 2048m fail immediately almost every time and need to get restarted several times before completing.

We’ve gone out to individual machines and tried executing them manually with the same result.


It turns out that the problem only exists on our SuSE boxes. The reason it has been happening more frequently is because we’ve been adding more machines, and the new ones are SuSE.

‘cat /proc/version’ on the SuSE boxes gives us:

Linux version 2.6.5-7.244-bigsmp ([email protected]) (gcc version 3.3.3 (SuSE Linux)) #1 SMP Mon Dec 12 18:32:25 UTC 2005

‘cat /proc/version’ on the RedHat boxes gives us:

Linux version 2.4.21-32.0.1.ELsmp ([email protected]) (gcc version 3.2.3 20030502 (Red Hat Linux 3.2.3-52)) #1 SMP Tue May 17 17:52:23 EDT 2005

‘uname -a’ gives us the following on BOTH types of machines:

UTC 2005 i686 i686 i386 GNU/Linux

No jobs are running on the machine, and no other processes are utilizing much memory. All of the processes currently running might be using 100 MB total.

‘top’ currently shows the following:

Mem:   4146528k total,  3536360k used,   610168k free,   132136k buffers
Swap:  4194288k total,        0k used,  4194288k free,  3283908k cached

‘vmstat’ currently shows the following:

procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
r  b   swpd   free   buff  cache   si   so    bi    bo   in    cs us sy id wa
0  0      0 610292 132136 3283908    0    0     0     2   26    15  0  0 100  0

If we kick off a job with the following command line (max heap of 1850 MB) it starts fine:

java/bin/java -Xmx1850M -cp helloworld.jar HelloWorld
Hello World

If we bump up the max heap size to 1875 MB it fails:

java/bin/java -Xmx1875M -cp helloworld.jar HelloWorld
Error occurred during initialization of VM
Could not reserve enough space for object heap
Could not create the Java virtual machine.

It’s quite clear that the memory currently in use is for buffering/caching, which is why so little is displayed as ‘free’. What isn’t clear is why there is a magical 1850 MB line above which Java can’t start.

Any explanations would be greatly appreciated.

You’re using a 32-bit OS, so you’re going to be seeing limits on the total size due to that. Other answers have covered this in more detail, so I’ll avoid repeating their information.

A behaviour that I noticed with our servers recently is that specifying a maximum heap size with -Xmx while not specifying a minimum heap size with -Xms would lead to Java’s server VM immediately attempting to allocate all of the memory needed for the maximum heap size. And sure, if the app gets up to that heap size, that’s the amount of memory you’ll need. But the chances are your apps will start out with comparatively small heaps and may only require the larger heap at some later point. Additionally specifying the minimum heap size lets your app start with a smaller heap and gradually grow it.

All of this isn’t going to help you increase your maximum heap size, but I figured it might help, so…
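A minimal sketch of the suggestion (the jar and class names below are placeholders, not from the original post):

```shell
# Placeholder names: app.jar / MainClass are illustrative only.
# -Xms sets the initial heap; -Xmx sets the ceiling it may grow to.
JAVA_OPTS="-Xms256m -Xmx2048m"
command -v java >/dev/null && java $JAVA_OPTS -cp app.jar MainClass || true
```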

As suggested in other responses, the problem is caused by exhaustion of virtual address space. A 32-bit Linux userspace program is usually limited to 3 GB of address space; the remaining 1 GB is used by the kernel (rationale: since the top 1 GB is a fixed kernel mapping, it’s not necessary to touch the page table when serving syscalls).

RHEL kernels, however, implement the so-called 4GB/4GB split, where the full 4 GB address space is available to userspace processes at the cost of a slight runtime overhead (the kernel lives in a separate 4 GB virtual address space).
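A back-of-the-envelope budget makes the observed ~1850 MB ceiling plausible. (The 200 MB native figure below is an assumed order of magnitude for JVM overhead, not a measured value.)

```shell
total_mb=4096    # full 32-bit virtual address space
kernel_mb=1024   # reserved by the kernel under the default 3G/1G split
native_mb=200    # assumed: JVM binary, shared libraries, thread stacks, malloc
echo "$(( total_mb - kernel_mb - native_mb )) MB theoretical ceiling for -Xmx"
# Fragmentation by shared libraries lowers the largest *contiguous* free
# region further, consistent with failure somewhere between 1850 and 1875 MB.
```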

Running a 32-bit OS is a mistake; you should upgrade at your earliest convenience.

I don’t know whether Java requires its heap to be in a single contiguous chunk, but if it does, asking for 1.8G of heap on a 32-bit box sounds like a tall order. You’re assuming that there is a chunk of address space, almost half of it, free at JVM startup time.

Depending on what other libraries are loaded at the time, there may not be. Libraries can allocate memory anywhere they like, so it could fragment your address space sufficiently that 1.8G is not available in one chunk.

There is only about 3 GB of address space available on 32-bit Linux anyway, and the libraries and the JVM itself use some of it to start with.
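One way to check the fragmentation theory is to look for the biggest hole in a process’s address space. This is a sketch (Linux-only, not from the original thread): the function parses a `/proc/<pid>/maps`-style listing from stdin.

```shell
# Report the largest hole, in MB, between consecutive mappings in a
# /proc/<pid>/maps-style listing -- an upper bound on the contiguous
# reservation a single -Xmx could hope to get in that process.
largest_gap_mb() {
  awk -F'[- ]' '
    function h2d(h,  i, n) {                # portable hex -> decimal
      n = 0; h = tolower(h)
      for (i = 1; i <= length(h); i++)
        n = n * 16 + index("0123456789abcdef", substr(h, i, 1)) - 1
      return n
    }
    NR > 1 { gap = h2d($1) - prev; if (gap > max) max = gap }
    { prev = h2d($2) }
    END { printf "%d\n", max / (1024 * 1024) }
  '
}
# On a live process: largest_gap_mb < /proc/self/maps
```

Running it against `/proc/self/maps` of a freshly started process on both a SuSE box and a RedHat box should show whether the SuSE kernel places libraries somewhere that splits the address space below ~1.8 GB.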

It seems that for 32-bit servers there is a JVM limitation that cannot be overcome (unless you find an unusual 32-bit JVM that does not impose a limit of 2 GB or less).

This thread on The Server Side has more details including several people who tested out various JVMs on 32-bit architectures. IBM’s JVM seems to allow 100 more MB but that’s not really going to get you what you want.

The “real” solution is to use a 64-bit server with a 64-bit JVM to get heaps larger than 2 GB per process. However, it’s important to also consider the impact of the larger pointer size (not just the larger addressable space) when using a 64-bit JVM: there will likely be performance and memory costs for workloads that use less than 4 GB of memory.

Food for thought: do each of these jobs really require 2GB of memory? Is there any way for the jobs to be modified to run within 1.8GB so this limit is not a problem?

Are your ulimit max memory size and virtual memory set to unlimited?

I wrote two applications, one medium sized and the other fairly small. I’d fire up the medium sized one (on Linux, CentOS) without any args (java server), and it would run just fine. But when I then fired up the smaller app with “java client”, it would tell me it couldn’t reserve enough space and wouldn’t run. I experimented and used -Xms and -Xmx, both with 10m, and they would both run without complaint… Go figure!

This may be way off track, but there are two things that spring to mind. Both of the following assume that you are running a 32-bit version of Linux.

There is a process size limit on Linux; I seem to remember on CentOS it was around 2.5 GB, and it is configured in the kernel (i.e. a recompile is needed to change it). Your process might be hitting that once you add up all the JVM code, PermGen space, and the rest of the JVM libraries.

The second thing is something I’ve come across: you may be running out of address space, weird as that sounds. I had a problem running Glassfish with a 1.5 GB heap; when it tried to compile a JSP by forking javac, it would fail because the OS couldn’t allocate enough address space for the newly created process, even though there was 12 GB of memory in the box. There may be something similar going on here.

I’m afraid the only solution to both of the above was to upgrade to a 64-bit kernel.

Hope this is of some use.

You need to look at upgrading your OS and Java. Java 5.0 is EOL, but if you cannot update to Java 6, you should at least use the latest patch level, update 22!

32-bit Windows is limited to ~1.3 GB, so you are doing well to set the maximum to 1.8 GB. Note: this is a problem with contiguous memory, and as your system runs, its memory space can get fragmented, so it does not surprise me that you have this problem.

A 64-bit OS doesn’t have this problem, as it has much more virtual address space; you don’t even have to upgrade to a 64-bit version of Java to take advantage of this.

BTW, in my experience, 32-bit Java 5.0 can be faster than 64-bit Java 5.0. It wasn’t until many years later that Java 6 update 10 was faster for 64-bit.

I upgraded the memory of a machine from 2GB to 4GB, and started to get the error straight away:

$ java -version
Error occurred during initialization of VM
Could not reserve enough space for object heap
Could not create the Java virtual machine.

The problem was the ulimit, which I had set at 1GB for the addressable space.
Increasing it to 2GB solved the issue.

-Xms and -Xmx had no effect.

It looks like Java tries to reserve memory in proportion to the available memory, and fails if it can’t.

Steps to resolve the error:

Error occurred during initialization of VM
Could not reserve enough space for object heap
Could not create the Java virtual machine.

Step 1: Reduce the heap sizes from the ones you used earlier:
java -Xms128m -Xmx512m -cp simple.jar

Step 2: Remove the RAM from the motherboard for a while, plug it back in, and restart; this may release the blocked heap memory:
java -Xms512m -Xmx1024m -cp simple.jar

Hope it will work well now… 🙂

I recently faced this issue. I have 3 Java applications that start with a 1024m or 1280m heap size.
Java looks at the available space in swap, and if there is not enough memory available, the JVM exits.

To resolve the issue, I had to end several programs that had a large amount of virtual memory allocated.

I was running on x86-64 linux with a 64-bit jvm.
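If free swap is the suspect, a quick check before launching (Linux-only sketch reading `/proc/meminfo`, nothing JVM-specific):

```shell
# Print free swap in MB; compare against the heap you are about to request.
awk '/^SwapFree:/ { printf "%d MB swap free\n", $2 / 1024 }' /proc/meminfo
```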

Which JVM are you using?
I know that BEA JRockit does not exceed 1850 MB for its max heap size. It does not fail, but it warns the user that it will not use more than 1850 MB.

I don’t know why there is such a limit but I know it exists for BEA JRockit.

Best regards.

Given that none of the other suggestions have worked (including many things I’d have suggested myself), to help troubleshoot further, could you try running:

sysctl -a

On both the SuSE and RedHat machines to see if there are any differences? I’m guessing the default configurations differ between these two distributions, and that’s what’s causing this.
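A sketch of the comparison step (the two files below are tiny stand-ins for real `sysctl -a | sort` captures copied from each box; the key names are examples, not known culprits):

```shell
# Stand-in captures; on the real machines you'd redirect `sysctl -a | sort`.
printf 'kernel.shmmax = 33554432\nvm.overcommit_memory = 0\n' > /tmp/sysctl.suse
printf 'kernel.shmmax = 33554432\nvm.overcommit_memory = 1\n' > /tmp/sysctl.redhat
diff /tmp/sysctl.suse /tmp/sysctl.redhat || true   # only the differing keys show up
```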

I am using a SOA environment; decreasing Xmx from 1024 to 768 in setSOADomainENV.cmd resolved the issue.

REM set DEFAULT_MEM_ARGS=-Xms512m -Xmx1024m
set DEFAULT_MEM_ARGS=-Xms512m -Xmx768m

On Windows, I solved this problem by directly editing the file /bin/cassandra.bat, changing the values of the “Xms” and “Xmx” JVM_OPTS parameters. You can try to edit the /bin/cassandra file; in it there is a commented-out JVM_OPTS variable, so try uncommenting and editing it.
