RFR(XS): 8193521: glibc wastes memory with default configuration

RFR(XS): 8193521: glibc wastes memory with default configuration

Doerr, Martin
Hi,

I noticed that glibc uses many malloc arenas. By default, glibc allocates a new 128 MB malloc arena for every thread (up to a limit, by default 8 * processor count).
This is good for threads which perform a lot of concurrent mallocs, but it doesn't fit the JVM well: the JVM has its own memory management and tends to allocate fewer, larger chunks.
(See the glibc source: libc_malloc calls arena_get2 in malloc.c, which calls _int_new_arena in arena.c.)
Using only one arena significantly reduces the virtual memory footprint. Saving memory seems more valuable for the JVM than optimizing concurrent mallocs.

I suggest using mallopt(M_ARENA_MAX, 1). It has been supported since glibc 2.15, and I don't think we still support older versions with jdk10/11.
Please review my proposal:
http://cr.openjdk.java.net/~mdoerr/8193521_glibc_malloc/webrev.00/

Running a VM on x86_64 with -Xmx128m -Xss136k -XX:MaxMetaspaceSize=32m -XX:CompressedClassSpaceSize=32m -XX:ReservedCodeCacheSize=64m showed
  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
30572 d056149   20   0 1839880 125504  22020 S 103,7  0,8   0:05.09 java

After the change, I got it down to:
  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
30494 d056149   20   0  406932 114360  22544 S 102,7  0,7   0:06.53 java

I'm not aware of performance critical concurrent malloc usages in the JVM. Maybe somebody else is?
Comments are welcome. I will also need a sponsor if this change is desired.

Best regards,
Martin


Re: RFR(XS): 8193521: glibc wastes memory with default configuration

Andrew Dinn
Hi Martin,

On 14/12/17 16:00, Doerr, Martin wrote:
> I suggest to use mallopt(M_ARENA_MAX, 1). It is supported since glibc
> 2.15. I don't think that we still support older versions with
> jdk10/11.

 . . .

> I'm not aware of performance critical concurrent malloc usages in the
> JVM. Maybe somebody else is? Comments are welcome. I will also need a
> sponsor if this change is desired.

I appreciate the motivation for proposing this and I don't know of any
issues it might cause for the JVM itself. However, it might cause
problems for some of the libraries the JVM links or, indeed, for Java
apps that employ their own native libraries. The former risk could, in
theory, be assessed. The latter one is unquantifiable but needs to be
taken seriously. So, I think this change is probably not safe.

regards,


Andrew Dinn
-----------
Senior Principal Software Engineer
Red Hat UK Ltd
Registered in England and Wales under Company Registration No. 03798903
Directors: Michael Cunningham, Michael ("Mike") O'Neill, Eric Shander

Re: RFR(XS): 8193521: glibc wastes memory with default configuration

Thomas Schatzl
In reply to this post by Doerr, Martin
Hi,

On Thu, 2017-12-14 at 16:00 +0000, Doerr, Martin wrote:

> I'm not aware of performance critical concurrent malloc usages in the
> JVM. Maybe somebody else is?

E.g. G1 remembered sets can be a huge consumer of malloc'ed memory, although G1 also partially caches these allocations.

Afair malloc performance can already be a significant issue in that area.

Thanks,
  Thomas


Re: RFR(XS): 8193521: glibc wastes memory with default configuration

Andrew Haley
In reply to this post by Doerr, Martin
On 14/12/17 16:00, Doerr, Martin wrote:
> I'm not aware of performance critical concurrent malloc usages in the JVM. Maybe somebody else is?
> Comments are welcome. I will also need a sponsor if this change is desired.

Is this something that a JVM should decide?  I would have thought it's
for the user of the system to decide policy.  They can set
MALLOC_ARENA_MAX on a system-wide basis if they really need to save
memory, or just for Java.  Or are you suggesting that Java is so
unusual that it justifies overriding a user's settings?

If a user has explicitly set MALLOC_ARENA_MAX, don't futz with it.

--
Andrew Haley
Java Platform Lead Engineer
Red Hat UK Ltd. <https://www.redhat.com>
EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671

RE: RFR(XS): 8193521: glibc wastes memory with default configuration

Doerr, Martin
In reply to this post by Andrew Dinn
Hi Andrew,

thanks for your feedback. I'm not really an expert on this topic, and I'm hoping that people on the mailing list can help evaluate the risk and the benefit.
Applications' own native libraries are a good point. But I still hope there are enough people out there who would like to see improvements for their use case.

Best regards,
Martin



RE: RFR(XS): 8193521: glibc wastes memory with default configuration

Doerr, Martin
In reply to this post by Thomas Schatzl
Hi Thomas,

thanks for your input. We're currently testing the change. I haven't seen regressions in benchmarks running with G1 so far, but I think we will need to run more of them.

Best regards,
Martin




RE: RFR(XS): 8193521: glibc wastes memory with default configuration

Doerr, Martin
In reply to this post by Andrew Haley
Hi Andrew,

> Or are you suggesting that Java is so unusual that it justifies overriding a user's settings?

I think this roughly matches what I thought.
I guess most people don't even know that this malloc-related behavior can be configured. (I didn't know much about it before I started to investigate this issue, either.)
And I also assume that most JVM users don't know that the JVM has its own arena implementation, which makes glibc's under-the-hood arenas useless for this kind of allocation.
So I think it would be nice if we could help these people.

Embedded applications, or cloud applications running in containers, may suffer from wasting this much virtual memory; the amount is really significant.

I'm not sure this is the best approach to address the issue, but I'd like to see an improvement here.

Best regards,
Martin



Re: RFR(XS): 8193521: glibc wastes memory with default configuration

Andrew Haley
On 14/12/17 18:51, Doerr, Martin wrote:
> And I also assume that most JVM users don't know that the JVM has its own arena implementation, which makes glibc's under-the-hood arenas useless for this kind of allocation.
> So I think it would be nice if we could help these people.

So, here's a radical thought: why are we using malloc() at all?
Shouldn't we be calling mmap() and then basing our arena in that?

--
Andrew Haley

Re: RFR(XS): 8193521: glibc wastes memory with default configuration

Severin Gehwolf
On Thu, 2017-12-14 at 19:06 +0000, Andrew Haley wrote:
> On 14/12/17 18:51, Doerr, Martin wrote:
> > And I also assume that most JVM users don't know that the JVM has
> > its own arena implementation, which makes glibc's under-the-hood
> > arenas useless for this kind of allocation.
> > So I think it would be nice if we could help these people.
>
> So, here's a radical thought: why are we using malloc() at all?
> Shouldn't we be calling mmap() and then basing our arena in that?

Failing that, it would make sense to limit the max arenas based on the
CPU shares a container gets (OSContainer::active_processor_count()),
i.e. it could be part of JEP 8182070, Container aware Java. Thoughts?

Thanks,
Severin


Re: RFR(XS): 8193521: glibc wastes memory with default configuration

Andrew Dinn
In reply to this post by Andrew Haley
On 14/12/17 19:06, Andrew Haley wrote:
> On 14/12/17 18:51, Doerr, Martin wrote:
>> And I also assume that most JVM users don't know that the JVM has its own arena implementation, which makes glibc's under-the-hood arenas useless for this kind of allocation.
>> So I think it would be nice if we could help these people.
>
> So, here's a radical thought: why are we using malloc() at all?
> Shouldn't we be calling mmap() and then basing our arena in that?
The JVM does that almost all of the time. My guess is that the
difference Martin is seeing is down to malloc operations done outside
libjvm.so.

regards,


Andrew Dinn

RE: RFR(XS): 8193521: glibc wastes memory with default configuration

Doerr, Martin
Hi,

the behavior of glibc can easily be traced with gdb.
I set a breakpoint in "mmap" with the condition "len>10000000" and then looked at the stack traces.

The first malloc in each new thread triggers a 128 MB mmap. This typically seems to be the initialization of thread-local storage. (Other mallocs may be hidden in libraries.)

#0  __mmap (addr=addr@entry=0x0, len=len@entry=134217728, prot=prot@entry=0, flags=flags@entry=16418, fd=fd@entry=-1, offset=offset@entry=0) at ../sysdeps/unix/sysv/linux/wordsize-64/mmap.c:33
#1  0x00007ffff72403d1 in new_heap (size=135168, size@entry=2264, top_pad=<optimized out>) at arena.c:438
#2  0x00007ffff7240c21 in _int_new_arena (size=24) at arena.c:646
#3  arena_get2 (size=size@entry=24, avoid_arena=avoid_arena@entry=0x0) at arena.c:879
#4  0x00007ffff724724a in arena_get2 (avoid_arena=0x0, size=24) at malloc.c:2911
#5  __GI___libc_malloc (bytes=24) at malloc.c:2911
#6  0x00007ffff7de9ff8 in allocate_and_init (map=<optimized out>) at dl-tls.c:603
#7  tls_get_addr_tail (ti=0x7ffff713e100, dtv=0x7ffff0038890, the_map=0x6031a0) at dl-tls.c:791
#8  0x00007ffff6b596ac in Thread::initialize_thread_current() () from openjdk10/lib/server/libjvm.so

The VM typically starts more than 20 threads even when only using java -version so we mmap several useless GB.

Best regards,
Martin



Re: RFR(XS): 8193521: glibc wastes memory with default configuration

Andrew Dinn
On 15/12/17 10:39, Doerr, Martin wrote:
> the behavior of the glibc can easily be traced by gdb.

Yes, indeed glibc is reserving memory not needed /by the JVM/ as part
of creating the threads.

> The VM typically starts more than 20 threads even when only using
> java -version so we mmap several useless GB.

Well, that is the bone of contention here.

This space may be useless to the JVM code, but it may not be useless to
other native code. It would be good if you could substantiate your claim
that this memory really is 'useless' (to all JVM users). Will your
proposed change damage performance for some users? If so, can we
scope the potential for damage and agree it is acceptable?

regards,


Andrew Dinn

Re: RFR(XS): 8193521: glibc wastes memory with default configuration

Andrew Dinn
In reply to this post by Doerr, Martin
On 15/12/17 10:39, Doerr, Martin wrote:
> The VM typically starts more than 20 threads even when only using
> java -version so we mmap several useless GB.

Also, I forgot to mention that it is not mmap calls per se that we need
to worry about but actual /physical commit/ of pages in the mmapped
region. We are not short of mappable address space and we don't incur
any significant cost by mapping the virtual address space we currently use.

I have no doubt your trace is showing a vmem page reservation rather
than a corresponding physical page commit. If the TLS regions mmapped in
your trace were really occupying several GBs of physical pages we would
already have done something about it.

regards,


Andrew Dinn

Re: RFR(XS): 8193521: glibc wastes memory with default configuration

Robbin Ehn
In reply to this post by Andrew Haley
On 12/14/2017 08:06 PM, Andrew Haley wrote:
> On 14/12/17 18:51, Doerr, Martin wrote:
>> And I also assume that most JVM users don't know that the JVM has its own arena implementation, which makes glibc's under-the-hood arenas useless for this kind of allocation.
>> So I think it would be nice if we could help these people.
>
> So, here's a radical thought: why are we using malloc() at all?
> Shouldn't we be calling mmap() and then basing our arena in that?

+1

/Robbin


RE: RFR(XS): 8193521: glibc wastes memory with default configuration

Doerr, Martin
In reply to this post by Andrew Dinn
Hi Andrew,

that's correct. The mmaps I have observed affect virtual memory; not all of it gets committed.
So there are basically two issues:
- virtual memory: it gets somewhat larger than needed, which may be an issue for users with a reduced ulimit, cloud applications in containers, and embedded systems.
- physical memory: we're not wasting as much here, but if the JVM handles all performance-critical allocations with its own management anyway, it should be worth saving.

Related to your earlier mail:
> This space may be useless to the JVM code but it may not be useless to
> other native code. It would be good if you could substantiate your claim
> that this memory really is 'useless' (to all JVM users). Will your
> proposed change damage performance for some users? If so then can we
> scope the potential for damage and agree it is acceptable?

We are testing the change and haven't seen performance regressions yet, but we have to run more benchmarks, especially with G1 (see the mail from Thomas Schatzl). Large server tests will be interesting as well.

I can't really estimate the risk for native libs, and I can understand the concern. Maybe it would be acceptable if the change were made switchable.

Best regards,
Martin



Re: RFR(XS): 8193521: glibc wastes memory with default configuration

Andrew Dinn
On 15/12/17 12:31, Doerr, Martin wrote:
> We are testing the change and didn't see performance regressions,
> yet. But we have to run more benchmarks, especially with G1 (see mail
> from Thomas Schatzl). Large server tests will be interesting as
> well.
>
> I can't really estimate the risk for native libs and I can understand
> the concern. Maybe it would be acceptable if the change gets
> switchable.

Well, I think Andrew Haley's point was that it already is switchable --
by setting MALLOC_ARENA_MAX to 1.

Of course, there is still a position between that status quo and your
patch, where the JVM makes the config call to the malloc library unless
the user explicitly inhibits it, e.g. via a new -XX command line option
or an alternative env setting.

That would allow a get-out for any users affected. However, the get-out
also assumes they know this change is responsible and how to undo it. I
am not sure our support org would enjoy fielding the calls this might
give rise to.

We (Red Hat) are indeed considering setting MALLOC_ARENA_MAX=1 in some
of our cloud deployments so as to avoid this allocation cost. It might
be appropriate where, say, -Xmx is set under some low threshold.
However, if we were to do that it would be under control of the script
launching the JVM rather than OpenJDK and would not reverse any prior
setting by our users. I don't see why others cannot follow a similar path.

regards,


Andrew Dinn

RE: RFR(XS): 8193521: glibc wastes memory with default configuration

Doerr, Martin
Hi both Andrews,

thank you for looking into this issue and providing input. I have tried to summarize what we have so far and added a comment to the bug.

I'd like to leave the bug open for a little longer.

Best regards,
Martin

