JVM non-heap memory leak


3
asked 4 April 2014 at 09:38
2 answers

There are a few possibilities given what you've shared, for example:

  • a leaky JNI library, or
  • a thread-creation leak, or
  • leaky dynamic code proxies (a perm-gen leak),

but I can only guess, because you didn't provide any log output or indicate whether the JVM was throwing an OutOfMemoryError (OOM) or hitting some other fault. Nor did you mention which garbage collector was in use, though if the flags shown above are the only JVM options in use, it's the CMS collector.
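A quick first check (a sketch, assuming $JVMPID holds the process id and jstack/ps are on the PATH): if the live thread count or the resident set size keeps growing while heap usage stays flat, that points at a non-heap leak such as runaway thread creation.

jstack $JVMPID | grep -c 'java.lang.Thread.State'   # live thread count
ps -o rss= -p $JVMPID                               # resident set size in kB, to compare against -Xmx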

The first step is to make the garbage collector's actions observable by adding these flags:

-XX:+PrintTenuringDistribution
-XX:+PrintGCDetails
-XX:+PrintGCTimeStamps
-XX:+HeapDumpOnOutOfMemoryError
-Xloggc:/path/to/garbage.log
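For example, the full launch line might look like this (a sketch only; the heap sizes, log path, and myapp.jar are placeholders, not values from the question):

java -Xms512m -Xmx512m \
     -XX:+PrintTenuringDistribution \
     -XX:+PrintGCDetails \
     -XX:+PrintGCTimeStamps \
     -XX:+HeapDumpOnOutOfMemoryError \
     -Xloggc:/var/log/myapp/garbage.log \
     -jar myapp.jar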

If it is indeed an OOM, one can analyze the heap dump with VisualVM or a similar tool. I also use VisualVM to monitor GC activity in situ via JMX. Visibility into JVM internals via JMX can be enabled with these JVM flags:

-Dcom.sun.management.jmxremote
-Dcom.sun.management.jmxremote.port=4231
-Dcom.sun.management.jmxremote.ssl=false
-Dcom.sun.management.jmxremote.authenticate=false
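With those flags in place you can attach from a workstation (a sketch; yourhost is a placeholder, and note that these flags disable SSL and authentication, so only use them on a trusted network):

jvisualvm               # then File > Add JMX Connection... and enter yourhost:4231
jconsole yourhost:4231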


Update

The log indeed helps. Thank you. That particular log shows that the machine ran out of physical memory before the JVM could grow the heap to its configured maximum. It tried to malloc ~77M and there was only ~63M of physical memory left:

Native memory allocation (malloc) failed to allocate 77873152 bytes for committing reserved memory.

..

/proc/meminfo: MemTotal: 1018724 kB MemFree: 63048 kB
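Putting those numbers side by side (plain unit conversion, nothing beyond what the log already says):

77873152 bytes  ≈ 74.3 MiB that the JVM tried to commit
63048 kB        ≈ 61.6 MiB of physical memory actually free
1018724 kB      ≈ 995 MiB of RAM in total

so the configured heap maximum plus perm gen, thread stacks, and native JVM overhead simply can't all be committed on a ~1 GB machine alongside the OS.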

Here's what I would do:

  1. Reduce the heap so that it "fits" on the machine. Set the min and max heap to the same value so you can tell immediately whether it fits - the JVM won't start up if it doesn't.

  2. You could also reduce the Java thread stack size (-Xss), but this thing doesn't seem to be creating a whole lot of threads, so the savings won't be more than a MB or two. I believe the default for 64-bit Linux is 1 MB. Reduce it too far and threads will start failing with StackOverflowErrors.

  3. Repeat test.

  4. Once it has been running under load for a short while, take an on-demand heap dump for differential diagnosis using jmap -dump:format=b,file=path_to_file <pid> (example commands after this list).

  5. One of two things should happen: (a) if there is a leak, it will eventually fail again, but the type of OOM ought to be different, or (b) there isn't a leak, in which case GC just has to work a bit harder and you're done. Given what you tried before, the former case is the more likely, unless your reduced maximum didn't fit either.

  6. If it does OOM, compare the two dumps with jhat or some other heap analyzer to see what grew.
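A sketch of how steps 1, 4, and 6 might look on the command line (the heap sizes, file names, and myapp.jar are placeholders, not values taken from the question):

# step 1: fixed-size heap that leaves room for everything else on a ~1 GB box
java -Xms384m -Xmx384m -XX:+HeapDumpOnOutOfMemoryError -jar myapp.jar

# step 4: on-demand heap dump while the app is running under load
jmap -dump:format=b,file=/tmp/before.hprof $JVMPID

# step 6: after taking a second dump later, diff the two in jhat's baseline mode
jhat -baseline /tmp/before.hprof /tmp/after.hprof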

Good luck!

3
answered 3 December 2019 at 06:34

Try running the process in 64-bit mode by adding -d64 to the JVM launch flags.

You can run pmap $JVMPID to see how the virtual memory is laid out. Run it periodically while the process is still up, before it crashes.
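For example (a sketch; the 60-second interval and the use of watch and pmap -x are assumptions, not part of the original answer):

# re-check the memory map every 60 seconds; the last line of pmap -x is the total
watch -n 60 "pmap -x $JVMPID | tail -n 1"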

0
answered 3 December 2019 at 06:34
