RFC: Epsilon GC JEP

RFC: Epsilon GC JEP

Aleksey Shipilev-4
Hi,

I would like to solicit feedback on Epsilon GC JEP:
  https://bugs.openjdk.java.net/browse/JDK-8174901
  http://openjdk.java.net/jeps/8174901

The JEP text should be pretty self-contained, but we can certainly add more
points after the discussion happens.

Over the last few months, there have been quite a few instances where Epsilon
proved to be a good vehicle for GC performance research, especially on the
object locality and code generation fronts. I think it also serves as the
trivial target for Erik's/Roman's GC interface work.

The implementation and tests are there in the Sandbox, for those who are curious.

Thanks,
-Aleksey



Re: RFC: Epsilon GC JEP

Aleksey Shipilev-4
No comments? I'll ask the OpenJDK Lead to move this JEP to Candidate soon, then.

Thanks,
-Aleksey


Re: RFC: Epsilon GC JEP

Erik Helin-2
Hi Aleksey,

first of all, thanks for trying this out and starting a discussion.
Regarding the JEP, I have a few questions/comments:
- the JEP specifies "last-drop performance improvements" as a
   motivation. However, I think you also know that taking a pause and
   compacting a heap that is mostly filled with garbage most likely
   results in higher throughput*. So are you thinking in terms of pauses
   here when you say performance?
- why do you think Epsilon GC is a good baseline? IMHO, no barriers is
   not the perfect baseline, since it is just a theoretical exercise.
   Just cranking up the heap and using Serial is a more realistic
   baseline, but even using that as a baseline is questionable.
- the JEP specifies this as an experimental feature, meaning that you
   intend non-JVM developers to be able to run this. Have you considered
   the cost of supporting this option? You say "New jtreg tests under
   hotspot/gc/epsilon would be enough to assert correctness". For which
   platforms? How often should these tests be run, every night? Whenever
   we want to do large changes, like updating logging, tracing, etc,
   will we have to take Epsilon GC into account? Will there be
   serviceability support for Epsilon GC, like jstat, MXBeans, perf
   counters etc?
- You quote "The experience, however, tells that many players in the
   Java ecosystem already did this exercise with expunging GC from their
   custom-built JVMs". So it seems that those users that want something
   like Epsilon GC are fine with building OpenJDK themselves? Having
   -XX:+UseEpsilonGC as a developer flag is very different from exposing
   it (and supporting it, even if in experimental mode) to users.

   Please recall that even removing/changing an experimental flag
   requires a CSR request and careful motivation as to why you want to
   remove it.

I guess most of my questions can be summarized as: this seems like it
could perhaps be a useful tool for JVM GC developers, so why do you want
to expose the flag to non-JVM developers (given all the
work/support/maintenance that comes with that)?

It is _great_ that you are experimenting and trying out new ideas in the
VM, please continue doing that! Please don't interpret my
questions/comments as too grumpy, this is just my experience from
maintaining 5-6 different GC algorithms for more than five years that is
speaking. There is _always_ a maintenance cost :)

Thanks,
Erik

* almost always. There will of course be scenarios where the throughput
could be higher without compacting.


Re: RFC: Epsilon GC JEP

Aleksey Shipilev-4
Hi Erik,

Thanks for looking into this!

On 07/18/2017 12:09 PM, Erik Helin wrote:
> first of all, thanks for trying this out and starting a discussion. Regarding
> the JEP, I have a few questions/comments:
> - the JEP specifies "last-drop performance improvements" as a
>   motivation. However, I think you also know that taking a pause and
>   compacting a heap that is mostly filled with garbage most likely
>   results in higher throughput*. So are you thinking in terms of pauses
>   here when you say performance?

This cuts both ways: while it is true that moving GC improves locality [1], it
is also true that the runtime overhead from barriers can be quite high [2, 3,
4]. So, "performance" in that section is tied to both throughput (no barriers)
and pauses (no pauses).

[1] https://shipilev.net/jvm-anatomy-park/11-moving-gc-locality
[2] https://shipilev.net/jvm-anatomy-park/13-intergenerational-barriers
[3] Also, remember the reason for UseCondCardMark
[4] Also, remember the whole thing about G1 barriers
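
To make it concrete, here is a rough Java model of what the card-table
post-write barrier does (just an illustration, not HotSpot code: the real
barrier is emitted by the interpreter/JIT, and the card size and dirty
value below are only representative):

  // Conceptual model of the card-mark barrier discussed above.
  class CardTableModel {
      static final int CARD_SHIFT = 9;   // 512-byte cards, a typical size
      static final byte DIRTY = 0;       // illustrative dirty value

      final byte[] cards;

      CardTableModel(long heapBytes) {
          cards = new byte[(int) (heapBytes >>> CARD_SHIFT)];
      }

      // Unconditional variant: every reference store pays one extra store.
      void postWriteBarrier(long fieldOffsetInHeap) {
          cards[(int) (fieldOffsetInHeap >>> CARD_SHIFT)] = DIRTY;
      }

      // -XX:+UseCondCardMark variant: a load and a branch avoid redundant
      // stores, trading extra instructions for less card-line contention.
      void postWriteBarrierCond(long fieldOffsetInHeap) {
          int idx = (int) (fieldOffsetInHeap >>> CARD_SHIFT);
          if (cards[idx] != DIRTY) {
              cards[idx] = DIRTY;
          }
      }
  }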

> - why do you think Epsilon GC is a good baseline? IMHO, no barriers is
>   not the perfect baseline, since it is just a theoretical exercise.
>   Just cranking up the heap and using Serial is more realistic
>   baseline, but even using that as a baseline is questionable.

It sometimes is. A non-generational GC is a good baseline for some workloads. Even
Serial does not cut it, because even if you crank up the old gen and trim down the
young gen, there is no way to disable the reference write barrier that maintains
the card tables.

> - the JEP specifies this as an experimental feature, meaning that you
>   intend non-JVM developers to be able to run this. Have you considered
>   the cost of supporting this option? You say "New jtreg tests under
>   hotspot/gc/epsilon would be enough to assert correctness". For which
>   platforms? How often should these tests be run, every night?

I think for all platforms, somewhere in hs-tier3? IMO, the current test set in
hotspot/gc/epsilon is fairly complete, and it takes less than a minute on my
4-core i7.
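
For a flavor of what such a test looks like, here is a sketch (the real
tests in the Sandbox are more thorough; the flags below assume Epsilon
stays behind the experimental-options gate):

  /*
   * @test TestEpsilonAllocUntilOOM
   * @summary Sketch: Epsilon never reclaims memory, so allocating past
   *          -Xmx must end in OutOfMemoryError.
   * @run main/othervm -XX:+UnlockExperimentalVMOptions -XX:+UseEpsilonGC
   *                   -Xmx256m TestEpsilonAllocUntilOOM
   */
  public class TestEpsilonAllocUntilOOM {
      static Object sink;  // keep the stores from being optimized away

      public static void main(String[] args) {
          try {
              for (int i = 0; i < 1_000_000; i++) {
                  sink = new byte[1024];  // ~1 GB allocated in total
              }
              throw new AssertionError("Expected OutOfMemoryError with a no-op GC");
          } catch (OutOfMemoryError expected) {
              // Expected: nothing is ever reclaimed, so the heap fills up.
          }
      }
  }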

> Whenever we want to do large changes, like updating logging, tracing, etc,
> will we have to take Epsilon GC into account? Will there be serviceability
> support for Epsilon GC, like jstat, MXBeans, perf counters etc?
I tried to address the maintenance costs in the JEP? Epsilon is unlikely to cause
trouble, since it mostly calls into the shared code. And the GC interface work
would hopefully make BarrierSet into a more shareable chunk of interface, which
makes the whole thing even more self-contained. There is some new code in
MemoryPools that handles the minimal diagnostics. MXBeans still work, at least
the ThreadMXBean that reports allocation pressure, although I'd need to add a
test to assert that.
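
That test would be a small one, roughly along these lines (a sketch; it
relies on the HotSpot-specific com.sun.management.ThreadMXBean extension):

  import java.lang.management.ManagementFactory;

  public class ThreadAllocCounterCheck {
      public static void main(String[] args) {
          // On HotSpot, the platform ThreadMXBean also implements the
          // com.sun.management extension with per-thread allocation counters.
          com.sun.management.ThreadMXBean bean =
              (com.sun.management.ThreadMXBean) ManagementFactory.getThreadMXBean();
          long tid = Thread.currentThread().getId();

          long before = bean.getThreadAllocatedBytes(tid);
          byte[][] hold = new byte[1024][];
          for (int i = 0; i < hold.length; i++) {
              hold[i] = new byte[1024];  // roughly 1 MB of allocations
          }
          long after = bean.getThreadAllocatedBytes(tid);

          // The counter is approximate, but it should have advanced by at
          // least the ~1 MB we just allocated.
          if (after - before < 1024 * 1024) {
              throw new AssertionError("Allocation counter did not advance: "
                                       + (after - before) + " bytes");
          }
      }
  }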

To me, if the no-op GC requires much maintenance whenever something in the JVM
changes, that points to the insanity of the GC interface. A no-op GC is a good
canary in the coal mine for this. This is why one of the motivations is seeing
exactly what a minimal GC should support to be functional.


> - You quote "The experience, however, tells that many players in the
>   Java ecosystem already did this exercise with expunging GC from their
>   custom-built JVMs". So it seems that those users that want something
>   like Epsilon GC are fine with building OpenJDK themselves? Having
>   -XX:+UseEpsilonGC as a developer flag is much different compared to
>   exposing it (and supporting, even if in experimental mode) to users.

There is a fair share of survivorship bias: we know about people who succeeded,
but do we know how many failed or gave up? I think developers who do day-to-day
Hotspot development grossly underestimate the effort required to even build a
custom JVM. Most power users I know went through this exercise with great pain. I
used to sing the same song to them: just build OpenJDK yourself, but then the
pesky details pour in. Like: oh, Windows; oh, Cygwin; oh, macOS; oh, Xcode; oh,
FreeType; oh, new compilers that build OpenJDK with warnings while the build
treats warnings as errors; oh, actual API mismatches against msvcrt, glibc,
whatever; etc. etc. etc. As much as the OpenJDK build has improved over the
years, I am not audacious enough to claim it will ever be a completely smooth
experience :) Now I just hand them binary builds.

So I think having the experimental feature available in the actual product build
extends the feature's exposure. For example, suppose you are an academic writing
a paper on GC: would you accept a custom-built JVM into your results, or would you
rather pick up the "gold" binary build from a standard distribution and run with it?


> I guess most of my question can be summarized as: this seems like it perhaps
> could be useful tool for JVM GC developers, why do you want to expose the flag
> to non-JVM developers (given all the work/support/maintenance that comes with
> that)?

My initial thought was that the discussion about the costs should involve
discussing the actual code. This is why there is a complete implementation in
the Sandbox, and also the webrev posted.

In the months following my initial (crazy) experiments, I had multiple people
coming to me and asking when Epsilon is going to be in the JDK, because they want
to use it. And those were the ultra-power-users who actually know what they are
doing with their garbage-free applications.

So the short answer to why Epsilon is good to have in the product is that the
cost seems low, the benefits are present, and so the cost/benefit ratio is still low.


> It is _great_ that you are experimenting and trying out new ideas in the VM,
> please continue doing that! Please don't interpret my questions/comments as
> to grumpy, this is just my experience from maintaining 5-6 different GC
> algorithms for more than five years that is speaking. There is _always_ a
> maintenance cost :)

Yeah, I know how that feels. Look at the actual Epsilon changes, do they look
scary to you, given your experience maintaining the related code?

Thanks,
-Aleksey



Re: RFC: Epsilon GC JEP

Erik Helin-2
On 07/18/2017 01:23 PM, Aleksey Shipilev wrote:

> Hi Erik,
>
> Thanks for looking into this!
>
> On 07/18/2017 12:09 PM, Erik Helin wrote:
>> first of all, thanks for trying this out and starting a discussion. Regarding
>> the JEP, I have a few questions/comments:
>> - the JEP specifies "last-drop performance improvements" as a
>>   motivation. However, I think you also know that taking a pause and
>>   compacting a heap that is mostly filled with garbage most likely
>>   results in higher throughput*. So are you thinking in terms of pauses
>>   here when you say performance?
>
> This cuts both ways: while it is true that moving GC improves locality [1], it
> is also true that the runtime overhead from barriers can be quite high [2, 3,
> 4]. So, "performance" in that section is tied to both throughput (no barriers)
> and pauses (no pauses).
>
> [1] https://shipilev.net/jvm-anatomy-park/11-moving-gc-locality
> [2] https://shipilev.net/jvm-anatomy-park/13-intergenerational-barriers
> [3] Also, remember the reason for UseCondCardMark
> [4] Also, remember the whole thing about G1 barriers

Absolutely, barriers can come with an overhead. But a barrier that
consists of dirtying a card does not come with a quite high overhead. In
fact, it comes with a very low overhead :)

>> - why do you think Epsilon GC is a good baseline? IMHO, no barriers is
>>   not the perfect baseline, since it is just a theoretical exercise.
>>   Just cranking up the heap and using Serial is more realistic
>>   baseline, but even using that as a baseline is questionable.
>
> It sometimes is. Non-generational GC is a good baseline for some workloads. Even
> Serial does not cut it, because even if you crank up old and trim down young,
> there is no way to disable reference write barrier store that maintains card tables.

I will still point out though that a GC without a barrier is still just
a theoretical baseline. One could imagine a single-gen mark-compact GC
for OpenJDK (that would require no barriers), but AFAIK almost all users
prefer the slight overhead of dirtying a card (and in return get a
generational GC) for the use cases where a single-gen mark-compact
algorithm would be applicable.

>> - the JEP specifies this as an experimental feature, meaning that you
>>   intend non-JVM developers to be able to run this. Have you considered
>>   the cost of supporting this option? You say "New jtreg tests under
>>   hotspot/gc/epsilon would be enough to assert correctness". For which
>>   platforms? How often should these tests be run, every night?
>
> I think for all platforms, somewhere in hs-tier3? IMO, current test set in
> hotspot/gc/epsilon is fairly complete, and it takes less than a minute on my
> 4-core i7.
>
>> Whenever we want to do large changes, like updating logging, tracing, etc,
>> will we have to take Epsilon GC into account? Will there be serviceability
>> support for Epsilon GC, like jstat, MXBeans, perf counters etc?
> I tried to address the maintenance costs in the JEP? It is unlikely to cause
> trouble, since it mostly calls into the shared code. And GC interface work would
> hopefully make BarrierSet into more shareable chunk of interface, which makes
> the whole thing even more self-contained. There is some new code in MemoryPools
> that handles the minimal diagnostics. MXBeans still work, at least ThreadMXBean
> that reports allocation pressure, although I'd need to add a test to assert that.
>
> To me, if the no-op GC requires much maintenance whenever something in JVM is
> changing, that points to the insanity of GC interface. No-op GC is a good canary
> in the coalmine for this. This is why one of the motivations is seeing what
> exactly a minimal GC should support to be functional.

Again, our opinions differ on this. Am I all for changing the GC
interface? Yes, I have expressed nothing but full support for the great
work that Roman is doing. Do I think we need something like a canary in
the coal mine for JVM-internal, GC-internal code? No. If you want
anything resembling a canary, write a unit test using googletest that
exercises the interface.

However, again, this might be useful for someone who wants to try making
some changes to the JVM GC code. But that, to me, is not enough to
expose it to non-JVM developers. It could be useful to have in the
source code though, maybe as a --with-jvm-feature kind of thing?

>> - You quote "The experience, however, tells that many players in the
>>   Java ecosystem already did this exercise with expunging GC from their
>>   custom-built JVMs". So it seems that those users that want something
>>   like Epsilon GC are fine with building OpenJDK themselves? Having
>>   -XX:+UseEpsilonGC as a developer flag is much different compared to
>>   exposing it (and supporting, even if in experimental mode) to users.
>
> There is a fair share of survivorship bias: we know about people who succeeded,
> do we know how many failed or given up? I think developers who do day-to-day
> Hotspot development grossly underestimate the effort required to even build a
> custom JVM. Most power users I know have did this exercise with great pains. I
> used to sing the same song to them: just build OpenJDK yourself, but then pesky
> details pour in. Like: oh, Windows, oh, Cygwin, oh MacOS, oh XCode, oh FreeType,
> oh new compilers that build OpenJDK with warnings and build does treat warnings
> as errors, oh actual API mismatches against msvcrt, glibc, whatever, etc. etc.
> etc. As much as OpenJDK build improved over the years, I am not audacious enough
> to claim it would ever be a completely smooth experience :) Now I am just
> willingly hand them binary builds.

Such users will still be able to get binary builds if someone is willing
to produce them with Epsilon GC. There are plenty of OpenJDK binary
builds available from various organizations/companies.

> So I think having the experimental feature available in the actual product build
> extends the feature exposure. For example, suppose you are the academic writing
> a paper on GC, would you accept custom-build JVM into your results, or would you
> rather pick up the "gold" binary build from a standard distribution and run with it?

I guess such a researcher would be producing a build from the same source
as the one they made changes to? How could they otherwise do any kind of
reasonable comparison?

>> I guess most of my question can be summarized as: this seems like it perhaps
>> could be useful tool for JVM GC developers, why do you want to expose the flag
>> to non-JVM developers (given all the work/support/maintenance that comes with
>> that)?
>
> My initial thought was that the discussion about the costs should involve
> discussing the actual code. This is why there is a complete implementation in
> the Sandbox, and also the webrev posted.
>
> In the months following my initial (crazy) experiments, I had multiple people
> coming to me and asking when Epsilon is going to be in JDK, because they want to
> use it. And those were the ultra-power-users who actually know what they are
> doing with their garbage-free applications.
>
> So the short answer about why Epsilon is good to have in product is because the
> cost seems low, the benefits are present, and so cost/benefit is still low.

And it is here that our opinions differ :) For you the maintenance cost
is low, whereas for me, having yet another command-line flag, yet
another code path, gets in the way. You have to respect that we have
different backgrounds and experiences here.

>> It is _great_ that you are experimenting and trying out new ideas in the VM,
>> please continue doing that! Please don't interpret my questions/comments as
>> to grumpy, this is just my experience from maintaining 5-6 different GC
>> algorithms for more than five years that is speaking. There is _always_ a
>> maintenance cost :)
>
> Yeah, I know how that feels. Look at the actual Epsilon changes, do they look
> scary to you, given your experience maintaining the related code?

I don't like taking the role of the grumpy open source maintainer :) No,
the code is not scary; code is rarely scary IMO, it is just code.
Running tests, making sure a test requiring -Xmx1g isn't run on a RPi,
having additional code paths, more cases to take into consideration when
refactoring: all of that is burdensome. And the benefits of benchmarking
against Epsilon vs benchmarking against Serial/Parallel aren't that high
to me.

But I can understand that it is useful when trying to evaluate, for
example, the cost of stores into a HashMap. That is why I'm not against
the code, but I'm not keen on exposing this to non-JVM developers.

Thanks,
Erik


Re: RFC: Epsilon GC JEP

Roman Kennke-6
In reply to this post by Aleksey Shipilev-4
Hi Aleksey,

what speaks against doing full GCs when memory runs out?

I can imagine scenarios when it could be useful to allow full-GCs:

1. Allow full-GCs only on System.gc()... for testing? Or for control
fanatics?
2. Allow full-GCs only on OOM... for containerized apps, or as a replacement
for letting the process die and respawn (i.e. don't care at all about
pauses, but care about throughput and absolutely no barriers)
3. Allow full-GCs in both cases

I can see this enabled/disabled selectively by flags.

Yes, I know, complexity, maintenance, etc, blah blah ;-) But it should be
very simple to do. Reusing markSweep.cpp should do it.

Basically Serial GC without the generational barriers.

What do you think?

Roman


Re: RFC: Epsilon GC JEP

Erik Osterlund
In reply to this post by Aleksey Shipilev-4
Hi Aleksey,

If I understand this correctly, the motivation for EpsilonGC is to be
able to measure the overheads due to GC pauses and GC barriers and
measure only the application throughput without GC jitter, and then use
that as a baseline for measuring performance of an actual GC
implementation compared to EpsilonGC.

However, automatic memory management is quite complicated when you
think about it. Will EpsilonGC allocate all memory up-front, or expand
the heap? In the case where it expands on demand until it runs out of
memory, what consequences does that potential expansion have on
throughput? In the case where it is allocated up-front, will pages be
pre-touched? If so, what NUMA nodes will the pre-mapped memory map
into? Will mutators try to allocate NUMA-local memory? What consequences
will the larger heap footprint have on throughput because of
decreased memory locality and, as a result, increased last-level cache
misses and suddenly having to spread to more NUMA nodes? Does the larger
footprint change the requirements on compressed oops and what
encoding/decoding of oop compression is required? In case of an
expanding heap, can it even use compressed oops? In case of a
non-expanding heap allocated up-front, does a comparison of a GC using
compressed oops with a baseline that inherently cannot use them make
sense? Will the lack of compaction, and the possibly worse object
locality that results, affect the performance of memory accesses?

I am not convinced that we can just remove GC-induced overheads from the
picture and measure the application throughput without the GC by using
an EpsilonGC as proposed. At least I do not think I would use it to draw
conclusions about GC-induced throughput loss. It seems like an apples to
oranges comparison to me. Or perhaps I have missed something?

Thanks,
/Erik


Re: RFC: Epsilon GC JEP

Aleksey Shipilev-4
In reply to this post by Erik Helin-2
On 07/18/2017 02:37 PM, Erik Helin wrote:
>> [1] https://shipilev.net/jvm-anatomy-park/11-moving-gc-locality
>> [2] https://shipilev.net/jvm-anatomy-park/13-intergenerational-barriers
>> [3] Also, remember the reason for UseCondCardMark
>> [4] Also, remember the whole thing about G1 barriers
>
> Absolutely, barriers can come with an overhead. But a barrier that consists of
> dirtying a card does not come with a quite high overhead. In fact, it comes with
> a very low overhead :)

Mhm! "Low" is in the eye of beholder. You can't beat zero overhead. And there
are people who literally count instructions on their hot paths, while still
developing in Java.

Let me ask you a trick question: how do you *know* the card mark overhead is
small, if you don't have a no-barrier GC to compare against?


>>> - why do you think Epsilon GC is a good baseline? IMHO, no barriers is
>>>   not the perfect baseline, since it is just a theoretical exercise.
>>>   Just cranking up the heap and using Serial is more realistic
>>>   baseline, but even using that as a baseline is questionable.
>>
>> It sometimes is. Non-generational GC is a good baseline for some workloads. Even
>> Serial does not cut it, because even if you crank up old and trim down young,
>> there is no way to disable reference write barrier store that maintains card
>> tables.
>
> I will still point out though that a GC without a barrier is still just a
> theoretical baseline. One could imagine a single-gen mark-compact GC for OpenJDK
> (that would require no barriers), but AFAIK almost all users prefer the slight
> overhead of dirtying a card (and in return get a generational GC) for the use
> cases where a single-gen mark-compact algorithm would be applicable.
Mark-compact, maybe. But single-gen mark-sweep algorithms are plentiful, see e.g.
the Go runtime. I have a hard time seeing how that is theoretical.


> However, again, this might be useful for someone who wants try to do some
> changes to the JVM GC code. But that, to me, is not enough to expose it to
> non-JVM developers. It could be useful to have in the source code though, maybe
> like a --with-jvm-feature kind of thing?

That would go against the maintainability argument, no? Because you will still
have to maintain the code, *and* it will require building a special JVM flavor.
So it is a lose-lose: users do not get it, and maintainers do not get simpler lives.


> [snip] Such users will still be able to get binary builds if someone is willing to
> produce them with Epsilon GC. There are plenty of OpenJDK binary builds
> available from various organizations/companies.

Well, yes. I actually happen to know the company which can distribute this in
the downstream OpenJDK builds and reap the ultra-power-users' loyalty. But I
maintain that having the code upstream is beneficial, even if that company is
going to do the maintenance work either way.


>> So the short answer about why Epsilon is good to have in product is because the
>> cost seems low, the benefits are present, and so cost/benefit is still low.
>
> And it is here that our opinions differ :) For you the maintenance cost is low,
> whereas for me, having yet another command-line flag, yet another code path,
> gets in the way. You have to respect that we have different background and
> experiences here.

I am not trying to challenge your background or experience here; I am
challenging the cost estimates, though. Taken ad absurdum, we could shoot down
any feature change coming into the JVM just because it introduces yet another
flag, yet another code path, etc.

I cannot see where the Epsilon maintenance would be a burden: it comes with
automated tests that run fast, its implementation seems trivial, and its exposure
to VM code seems trivial too (apart from the BarrierSet thing that would be
trimmed down with the GC interface work).


>> Yeah, I know how that feels. Look at the actual Epsilon changes, do they look
>> scary to you, given your experience maintaining the related code?
>
> I don't like taking the role of the grumpy open source maintainer :) No, the
> code is not scary, code is rarely scary IMO, it is just code. Running tests,
> fixing that a test -Xmx1g isn't run on a RPi, having additional code paths, more
> cases to take into consideration when refactoring, is burdensome. And to me, the
> benefits of benchmarking against Epsilon vs benchmarking against Serial/Parallel
> isn't that high to me.
>
> But, I can understand that it is useful when trying to evaluate for example the
> cost of stores into a HashMap. Which is why I'm not against the code, but I'm
> not keen on exposing this to non-JVM developers.
I hear you, but the thing is, Epsilon does not seem to be just a coding exercise
anymore. Epsilon is useful for GC performance work, especially when it is readily
available, and there are willing users ready to adopt it. Just as we respect the
maintainers' burden in the product, we also have to see what benefits the users,
especially the ones who are championing our project's performance even by cutting
corners with, e.g., no-op GCs.

Thanks,
-Aleksey



Re: RFC: Epsilon GC JEP

Roman Kennke-6
In reply to this post by Erik Osterlund
At the very least, Epsilon's a great tool for measuring the cost of
barriers.

How many times have we heard the question: 'but what is the overhead of
the additional barriers of Shenandoah?' And we couldn't really answer
it. Compared to what? G1? Serial? Parallel? CMS? Each of them has its
own peculiarities when it comes to barriers.

With Epsilon it is possible to construct a benchmark that does certain
heap accesses (primitive/object reads and writes, special stuff like
CASes, etc.), does no further allocation (thus locality spread doesn't
really matter), and gives an answer to those questions: no-barrier
throughput is this, and with that GC's barriers we get this, etc.
I realize that such results are a bit theoretical, but they give a much
better idea than not having any way to measure this in isolation
at all.

Roman


Re: RFC: Epsilon GC JEP

Thomas Schatzl
In reply to this post by Aleksey Shipilev-4
Hi Aleksey,

  I would like to expand this cost/benefit analysis a bit; I think the
most contentious point brought up by Erik has been the develop vs.
experimental flag issue.

For that, let me present my understanding of the size and costs of
making this an experimental (effectively product) flag vs. a develop flag
for the intended target group as presented here.

On Tue, 2017-07-18 at 13:23 +0200, Aleksey Shipilev wrote:
> Hi Erik,
>
> Thanks for looking into this!
>
> On 07/18/2017 12:09 PM, Erik Helin wrote:
> >
> > first of all, thanks for trying this out and starting a discussion.
> > Regarding the JEP, I have a few questions/comments:
[...]

>
> > - why do you think Epsilon GC is a good baseline? IMHO, no barriers
> > is not the perfect baseline, since it is just a theoretical
> > exercise. Just cranking up the heap and using Serial is more
> > realistic   baseline, but even using that as a baseline is
> > questionable.
> It sometimes is. Non-generational GC is a good baseline for some
> workloads. Even Serial does not cut it, because even if you crank up
> old and trim down young, there is no way to disable reference write
> barrier store that maintains card tables.

Not prevented by making it a develop option.

> > - the JEP specifies this as an experimental feature, meaning that
> > you intend non-JVM developers to be able to run this. Have you
> > considered the cost of supporting this option? You say "New jtreg
> > tests under hotspot/gc/epsilon would be enough to assert
> > correctness". For which platforms? How often should these tests be
> > run, every night? 
> I think for all platforms, somewhere in hs-tier3? IMO, current test
> set in hotspot/gc/epsilon is fairly complete, and it takes less than
> a minute on my 4-core i7.

Running it daily, on X platforms on Y OSes for Z releases adds up
quickly. We could run something else instead. And there is always
something else to run on these machines, trust me. :)

> >
> > Whenever we want to do large changes, like updating logging,
> > tracing, etc, will we have to take Epsilon GC into account? Will
> > there be serviceability support for Epsilon GC, like jstat,
> > MXBeans, perf counters etc?
> I tried to address the maintenance costs in the JEP? It is unlikely
> to cause trouble, since it mostly calls into the shared code. And GC
> interface work would hopefully make BarrierSet into more shareable
> chunk of interface, which makes the whole thing even more self-
> contained. There is some new code in MemoryPools that handles the
> minimal diagnostics. MXBeans still work, at least ThreadMXBean
> that reports allocation pressure, although I'd need to add a test to
> assert that.
>
> To me, if the no-op GC requires much maintenance whenever something
> in JVM is changing, that points to the insanity of GC interface. No-
> op GC is a good canary in the coalmine for this. This is why one of
> the motivations is seeing what exactly a minimal GC should support to
> be functional.

Sanity checking of the interfaces is not prevented by a develop option.

> >
> > - You quote "The experience, however, tells that many players in
> > the Java ecosystem already did this exercise with expunging GC from
> > their custom-built JVMs". So it seems that those users that want
> > something like Epsilon GC are fine with building OpenJDK
> > themselves? Having -XX:+UseEpsilonGC as a developer flag is much
> > different compared to exposing it (and supporting, even if in
> > experimental mode) to users.
>
> There is a fair share of survivorship bias: we know about people who
> succeeded, do we know how many failed or given up? I think developers
> who do day-to-day Hotspot development grossly underestimate the
> effort required to even build a custom JVM. Most power users I know
> have did this exercise with great pains. I used to sing the same song
> to them: just build OpenJDK yourself, but then pesky details pour in.
> Like: oh, Windows, oh, Cygwin, oh MacOS, oh XCode, oh FreeType, oh
> new compilers that build OpenJDK with warnings and build does treat
> warnings as errors, oh actual API mismatches against msvcrt, glibc,
> whatever, etc. etc. etc. As much as OpenJDK build improved over the
> years, I am not audacious enough to claim it would ever be a
> completely smooth experience :) Now I am just willingly hand them
> binary builds.
>
> So I think having the experimental feature available in the actual
> product build extends the feature exposure.

I agree here.

The question is, by how much. So academics (and I am not trying to pick
on academics here, you brought them up ;)) who write a paper on GC but
never need to rebuild the VM (including the JDK here), because they
don't make any changes, would be inconvenienced.

Let me ask, how many do you expect there to be? From my understanding there
seems to be a very manageable yearly total GC paper output at the usual
conferences. I am not sure how putting Epsilon GC in the product would improve
that.

So, even after all these target-group concerns, how much time do you think the
people writing that paper (who do not need to recompile the VM but do need to
show their numbers against Epsilon GC) are going to spend on getting numbers,
compared to the hypothetical time for compiling the VM?

[My personal experience is that when developing any changes, by far most of the
time is spent waiting for the machine(s) to complete testing, not writing the
actual changes or building. When writing a paper, my experience is that a very
large part of the time is spent running and re-running tests over and over again
to be able to understand and explain the results, or tweaking changes, or simply
fixing bugs for some results.]

> For example, suppose you are the academic writing a paper on GC,
> would you accept custom-build JVM into your results, or would you
> rather pick up the "gold" binary build from a standard distribution
> and run with it?

Not sure what you meant with this latter argument, if it is actually an
argument. If I wanted to effect a change in the VM and measure it, I
would already need to change and recompile the VM. So it is not a big
stretch to imagine that baselines could come from something recompiled.
I have seen quite a few papers using modified baselines for one or the
other reason (like adding necessary instrumentation, maybe fixing
obvious bugs).

From experience I know that, for many reasons, it is already often
extremely hard, if not impossible, for somebody else to reproduce
particular results. Even understanding some baseline results may require
some imagination as to how they were obtained, never mind reproducing
them. There seems to be a very small step from trusting results from a
"gold" official binary to trusting a slightly modified one.


As for the amount of inconvenience, I think the users that already need
to recompile for their changes are not very much inconvenienced. I.e.
changing a single "develop" to "product" seems to be a very small
effort.

> > I guess most of my question can be summarized as: this seems like
> > it perhaps could be useful tool for JVM GC developers, why do you
> > want to expose the flag to non-JVM developers (given all the
> > work/support/maintenance that comes with that)?
> My initial thought was that the discussion about the costs should
> involve discussing the actual code. This is why there is a complete
> implementation in the Sandbox, and also the webrev posted.
>
> In the months following my initial (crazy) experiments, I had
> multiple people coming to me and asking when Epsilon is going to be
> in JDK, because they want to use it. And those were the ultra-power-
> users who actually know what they are doing with their garbage-free
> applications.

Aren't ultra-power-users able to rebuild the VM? What is their cost vs.
the effort spent on making their applications garbage-free or
implementing the necessary workarounds to be able to use that GC
(the mentioned load-balancer trickery etc.)?

> So the short answer about why Epsilon is good to have in product is
> because the cost seems low, the benefits are present, and so
> cost/benefit is still low.

The number of people benefitting from having this available in a
product build seems to be extremely small. And so do their relative
costs of fixing that themselves.

Increased exposure seems to be a real recurring cost for maintenance in
the product, although it seems relatively small compared to other
features. Still somebody has to do it.

> > It is _great_ that you are experimenting and trying out new ideas
> > in the VM, please continue doing that! Please don't interpret my
> > questions/comments as to grumpy, this is just my experience from
> > maintaining 5-6 different GC algorithms for more than five years
> > that is speaking. There is _always_ a maintenance cost :)
> Yeah, I know how that feels. Look at the actual Epsilon changes, do
> they look scary to you, given your experience maintaining the related
> code?

Well, 1500 LOC (well, ~800 without the tests) of changes do look scary
to me, whatever they do :)

Overall, on the question of develop vs. experimental option, I would
tend to prefer a develop option.
In this area there simply seem to be too many downsides compared to the
upsides for an extremely limited user group.

Thanks,
  Thomas


Re: RFC: Epsilon GC JEP

Aleksey Shipilev-4
In reply to this post by Erik Osterlund
On 07/18/2017 03:20 PM, Erik Österlund wrote:
> If I understand this correctly, the motivation for EpsilonGC is to be able to
> measure the overheads due to GC pauses and GC barriers and measure only the
> application throughput without GC jitter, and then use that as a baseline for
> measuring performance of an actual GC implementation compared to EpsilonGC.

There are several motivations, all in "Motivation" section in JEP. Performance
work is one of them, that's right.

> However, automatic memory management is quite complicated when you think about
> it.

Yes, and many of those complications are handled by the shared code that Epsilon
calls into, just like any other GC.

> Will EpsilonGC allocate all memory up-front, or expand the heap? In the case
> where it expanded on-demand until it runs out of memory, what consequences does
> that potential expansion have on throughput?

It does have consequences, the same kind of consequences it has with allocating
TLABs. You can trim them down with larger TLABs, larger pages, pre-touching, all
of which are handled outside of Epsilon, by shared code.

> In the case it is allocated upfront, will pages be pre-touched?
Oh yes, there are two lines of code that also handle AlwaysPreTouch. But
otherwise it is handled by shared heap space allocation code. I would like to
see AlwaysPreTouch handled more consistently across GCs though. This is my point
from another mail: if Epsilon has to do something on its own, it is a good sign
the shared GC utilities are not of much use.
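
For the curious: "pre-touching" just means walking the reserved range once and
writing to every page, so the OS backs the heap with real memory before the
application runs. A plain-C++ sketch of the idea (illustrative, not the HotSpot
or Sandbox code):

  #include <cstddef>

  // Touch one byte per page across the committed range. After this loop, the
  // first allocations do not pay for page faults while the workload is running.
  static void pretouch(char* start, char* end, std::size_t page_size) {
    for (char* p = start; p < end; p += page_size) {
      *p = 0;
    }
  }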

> If so, what NUMA nodes will the pre-mapped memory map in to? Will mutators
> try to allocate NUMA-local memory?
I think this is handled by shared code, at least for NUMA interleaving. I would
hope that NUMA-aware allocation could be granular to TLABs, in which case it
goes into shared code too, instead of pushing to reimplement this for every GC.
If not, then Epsilon is not fully NUMA-aware.

> What consequences will the larger heap footprint have on the throughput
> because of decreased memory locality and as a result increased last level
> cache misses and suddenly having to spread to more NUMA nodes?
Yes, it would. See two paragraphs below:

> Does the larger footprint change the requirements on compressed oops and
> what encoding/decoding of oop compression is required? In case of an
> expanding heap - can it even use compressed oops? In case of a not expanding
> heap allocated up-front, does a comparison of a GC using compressed oops with
> a baseline that can inherently not use it make sense?
I guess the only relevant point here is, what happens if you need more heap than
32 GB, and then you have to disable compressed oops? In which case, of course,
you will lose. But, you have to keep in mind that the target applications that
are supposed to benefit from Epsilon are low-heap, quite probably zero-garbage.
In this case, the question about heap size is moot: you allocate enough heap to
hold your live data, whether with Epsilon or not.
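
As a reminder of where the 32 GB figure comes from: with the default 8-byte
object alignment, a 32-bit compressed oop is the field address minus the heap
base, shifted right by 3, so it can only reach 2^32 * 8 bytes = 32 GB from the
heap base. A sketch of the decode side (illustrative names, plain C++):

  #include <cstdint>

  static char* heap_base;  // base of the reserved Java heap (assumed)

  // Heap-based compressed oop decoding with shift 3 (8-byte object alignment).
  // Offsets above 2^32 * 8 = 32 GB do not fit, hence compressed oops go away.
  inline void* decode_narrow_oop(std::uint32_t narrow) {
    return heap_base + (static_cast<std::uint64_t>(narrow) << 3);
  }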

> Will lack of compaction and resulting possibly worse object locality of
> memory accesses affect performance?
Yes, it would. But it cuts both ways: you get more throughput *if* you code with
locality in mind. I am not against GCs that compact, but I do understand there
are cases where I don't want them either.

> I am not convinced that we can just remove GC-induced overheads from the picture
> and measure the application throughput without the GC by using an EpsilonGC as
> proposed. At least I do not think I would use it to draw conclusions about
> GC-induced throughput loss. It seems like an apples to oranges comparison to me.
> Or perhaps I have missed something?

I think this is a strawman: it points out all the other things that could go wrong,
to claim that the few things the actual no-op GC implementation has to do (e.g. an
empty BarrierSet, allocation, and responding to heap exhaustion) are not needed
either :)
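
To spell out how short that list really is, here is a plain-C++ sketch of a
no-op heap reduced to those essentials (obviously simplified, and not the actual
Sandbox code):

  #include <atomic>
  #include <cstddef>

  struct NoOpHeap {
    char*              end;  // end of the single pre-reserved, committed space
    std::atomic<char*> top;  // current allocation pointer

    // "Allocation" is a bump of the pointer; nullptr means the caller should
    // report OutOfMemoryError, because nothing is ever reclaimed. Overshooting
    // top past end is harmless here: every later allocation fails the same way.
    void* allocate(std::size_t size) {
      char* old_top = top.fetch_add(static_cast<std::ptrdiff_t>(size));
      return (old_top + size <= end) ? old_top : nullptr;
    }

    // The "write barrier" is empty: there is no remembered set to maintain.
    static void post_write_barrier(void* /*field*/, void* /*value*/) {}
  };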

Thanks,
-Aleksey



Re: RFC: Epsilon GC JEP

Aleksey Shipilev-4
In reply to this post by Thomas Schatzl
Hi Thomas,

(reading the rest a bit later)

On 07/18/2017 03:34 PM, Thomas Schatzl wrote:
> I would like to expand this cost/benefit analysis a bit; I think the
> most contentious point brought up by Erik has been the develop vs.
> experimental flag issue.

> For that, let me present you my understanding of the size and costs of
> making this an experimental (actually product) vs. develop flag for the
> intended target group as presented here.

> Overall, on the question of develop vs. experimental option, I would tend to
> prefer a develop option. In this area there simply seem to be too many
> downsides compared to the upsides for an extremely limited user group.
Ok, suppose we want to hide it from most users. Now we need an option that is
available in release builds (because you want to test native GC performance),
but not openly available in release builds. Which option type is that? I thought
"experimental" is closest to that.

Thanks,
-Aleksey



Re: RFC: Epsilon GC JEP

Aleksey Shipilev-4
In reply to this post by Thomas Schatzl
(I have read the rest)

Okay, you have convinced me, maintainers do not want to have it exposed as
experimental option. Would you be willing to accept it as develop then?

Other random ramblings:

On 07/18/2017 03:34 PM, Thomas Schatzl wrote:
> Running it daily, on X platforms on Y OSes for Z releases adds up
> quickly. Could run something else instead. And there is always
> something else to run on these machines, trust me. :)

Right. Well, I have recently authored a few changes [1,2] that made Shenandoah
GC tests run around 20% faster in fastdebug. I suppose some of that improvement
is applicable to other GCs too. My question is, can I please have 1 minute of
that machine time per build back as payment? :D

[1] http://hg.openjdk.java.net/jdk10/hs/hotspot/rev/f922d99ce776
[2] http://hg.openjdk.java.net/jdk10/hs/hotspot/rev/9fe3d41b0e51

> The question is, by how much. So academics (and I am not trying to hit
> on academics here, you brought them up ;)) who write a paper on GC but
> never need to rebuild the VM (including the JDK here), because they do
> not make any changes, would be inconvenienced.
>
> Let me ask, how many do you expect these to be? From my understanding there
> seems to be a very manageable yearly total GC paper output at the usual
> conferences. Not sure how putting Epsilon GC in product would improve that.

"Build it and they will come" works here. "develop" is seen as unstable and
untouchable by most.

> As for the amount of inconvenience, I think the users that already need
> to recompile for their changes are not very much inconvenienced. I.e.
> changing a single "develop" to "product" seems to be a very small
> effort.

Okay, we can do this downstream.

> Aren't ultra-power-users able to rebuild the VM? What is their cost vs.
> the effort spent on making their applications garbage-free or
> implementing the necessary workarounds to be able to use that GC
> (the mentioned load-balancer trickery etc.)?

I am pretty sure they would be much, much, much happier to download an
Oracle/RedHat/Azul binary build and run with it in production, thus
capitalizing on all the testing those companies did for their JDK binaries.
Native compilers and native toolchains are bottomless sources of bugs too,
right?


Thanks,
-Aleksey



Re: RFC: Epsilon GC JEP

Erik Helin-2
In reply to this post by Aleksey Shipilev-4
On 07/18/2017 03:26 PM, Aleksey Shipilev wrote:

> On 07/18/2017 02:37 PM, Erik Helin wrote:
>>> [1] https://shipilev.net/jvm-anatomy-park/11-moving-gc-locality
>>> [2] https://shipilev.net/jvm-anatomy-park/13-intergenerational-barriers
>>> [3] Also, remember the reason for UseCondCardMark
>>> [4] Also, remember the whole thing about G1 barriers
>>
>> Absolutely, barriers can come with an overhead. But a barrier that consists of
>> dirtying a card does not come with a quite high overhead. In fact, it comes with
>> a very low overhead :)
>
> Mhm! "Low" is in the eye of the beholder. You can't beat zero overhead. And there
> are people who literally count instructions on their hot paths, while still
> developing in Java.
>
> Let me ask you a trick question: how do you *know* the card mark overhead is
> small, if you don't have a no-barrier GC to compare against?

There is no need for trick questions. Aleksey, we are working towards
the same goal: making OpenJDK's GCs better. That doesn't mean we can't
have different opinions on a few topics.

You of course know the cost of a GC barrier by measuring it. You measure it
by constructing a build where you do not emit the barriers and comparing
it to a build where you do. Again, I have already said that I can see
your work being useful for other JVM developers.
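
For concreteness, the card-mark barrier being discussed is a shift and a
one-byte store after every reference store; roughly (a sketch with the usual
default card size, not the exact code any particular collector emits):

  #include <cstdint>

  static const int kCardShift = 9;           // 2^9 = 512-byte cards
  static volatile std::uint8_t* card_table;  // biased so (addr >> 9) indexes it

  // Post-write barrier: after storing a reference into a field, dirty the card
  // covering that field, so a young collection can find old-to-young pointers
  // without scanning the whole old generation.
  inline void post_write_barrier(const void* field_addr) {
    card_table[reinterpret_cast<std::uintptr_t>(field_addr) >> kCardShift] = 0;
  }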

>>>> - why do you think Epsilon GC is a good baseline? IMHO, no barriers is
>>>>   not the perfect baseline, since it is just a theoretical exercise.
>>>>   Just cranking up the heap and using Serial is more realistic
>>>>   baseline, but even using that as a baseline is questionable.
>>>
>>> It sometimes is. Non-generational GC is a good baseline for some workloads. Even
>>> Serial does not cut it, because even if you crank up old and trim down young,
>>> there is no way to disable the reference write barrier store that maintains the
>>> card tables.
>>
>> I will still point out though that a GC without a barrier is still just a
>> theoretical baseline. One could imagine a single-gen mark-compact GC for OpenJDK
>> (that would require no barriers), but AFAIK almost all users prefer the slight
>> overhead of dirtying a card (and in return get a generational GC) for the use
>> cases where a single-gen mark-compact algorithm would be applicable.
>
> Mark-compact, maybe. But single-gen mark-sweep algorithms are plentiful, see e.g.
> the Go runtime. I have a hard time seeing how that is theoretical.

That is not what I said. As I wrote above:

 > but AFAIK almost all users prefer the slight
 > overhead of dirtying a card (and in return get a generational GC) for
 > the use cases where a single-gen mark-compact algorithm would be
 > applicable.

There are of course use cases for single-gen mark-sweep algorithms, and
as I write above, for single-gen mark-compact algorithms as well. But
for Java, and OpenJDK, at least it is my understanding that most users
prefer a generational algorithm like Serial compared to a single-gen
mark-compact algorithm (at least I have not seen a lot of users asking
for that). But maybe I'm missing something here?

This is why I wrote, and still think, that a GC without a barrier for
Java seems more like a theoretical baseline. There are of course single-
generation GC algorithms that use a barrier which it would be very
interesting to see implemented in OpenJDK (including the great work that
you and others are doing with Shenandoah).

>> However, again, this might be useful for someone who wants to try to do some
>> changes to the JVM GC code. But that, to me, is not enough to expose it to
>> non-JVM developers. It could be useful to have in the source code though, maybe
>> like a --with-jvm-feature kind of thing?
>
> That would go against the maintainability argument, no? Because you will still
> have to maintain the code, *and* it will require building a special JVM flavor.
> So it is a lose-lose: neither users get it, nor maintainers have simpler lives.

No, I don't view it that way. Having the code in the upstream repository
and having it exposed in binary builds are two very different things to
me, and come with very different requirements in terms of maintenance.
If the code is in the upstream repository, then it is a tool for
developers working in OpenJDK and for integrators building OpenJDK. We
have a much easier time changing such code compared to code that users
have come to rely on (and expect certain behavior from).

>> [snip] Such users will still be able to get binary builds if someone is willing to
>> produce them with Epsilon GC. There are plenty of OpenJDK binary builds
>> available from various organizations/companies.
>
> Well, yes. I actually happen to know the company which can distribute this in
> the downstream OpenJDK builds, and reap the ultra-power-users loyalty. But, I am
> maintaining that having the code upstream is beneficial, even if that company is
> going to do maintenance work either way.
>
>
>>> So the short answer about why Epsilon is good to have in product is because the
>>> cost seems low, the benefits are present, and so cost/benefit is still low.
>>
>> And it is here that our opinions differ :) For you the maintenance cost is low,
>> whereas for me, having yet another command-line flag, yet another code path,
gets in the way. You have to respect that we have different backgrounds and
>> experiences here.
>
> I am not trying to challenge your background or experience here, I am
> challenging the cost estimates though. Because ad absurdum, we can shoot down
> any feature change coming into JVM, just because it introduces yet another flag,
> yet another code path, etc.

Do you see me doing that? I at least hope I am welcoming to everyone
that wants to contribute a patch to OpenJDK, big or small (please let me
know otherwise).

> I cannot see where the Epsilon maintenance would be a burden: it comes with
> automated tests that run fast, its implementation seems trivial, its exposure
> to VM code seems trivial too (apart from the BarrierSet thing that would be
> trimmed down with GC interface work).

And from my experience there is always maintenance work (documentation,
support, testing matrix increase, etc) with supporting a new kind of
collector. You and I just do a different cost/benefit analysis on
exposing this behavior to non-JVM developers.

>>> Yeah, I know how that feels. Look at the actual Epsilon changes, do they look
>>> scary to you, given your experience maintaining the related code?
>>
>> I don't like taking the role of the grumpy open source maintainer :) No, the
>> code is not scary, code is rarely scary IMO, it is just code. Running tests,
>> fixing it so that a test with -Xmx1g isn't run on an RPi, having additional code paths, more
>> cases to take into consideration when refactoring, is burdensome. And the
>> benefits of benchmarking against Epsilon vs. benchmarking against Serial/Parallel
>> aren't that high to me.
>>
>> But, I can understand that it is useful when trying to evaluate for example the
>> cost of stores into a HashMap. Which is why I'm not against the code, but I'm
>> not keen on exposing this to non-JVM developers.
>
> I hear you, but the thing is, Epsilon does not seem like a coding exercise anymore.
> Epsilon is useful for GC performance work, especially when readily available, and
> there are willing users ready to adopt it. Similarly to how we respect the maintainers'
> burden in the product, we also have to see what benefits users, especially the
> ones who are championing our project's performance even by cutting corners with
> e.g. no-op GCs.

Yes, you always have to weigh the benefits against the costs, and in
this case, at least for now and to me, the benefits of exposing Epsilon
GC to non-JVM developers do not outweigh the costs. Who knows, maybe
this will change and we redo the cost/benefit analysis? It is very easy
to go from a developer flag to an experimental flag; it is way, way
harder to go from an experimental flag to a developer flag.

Thanks,
Erik

> Thanks,
> -Aleksey
>

Re: RFC: Epsilon GC JEP

Aleksey Shipilev-4
Hi Erik,

I think we are coming to a consensus here.

Piece-wise:

On 07/18/2017 05:22 PM, Erik Helin wrote:

> That is not what I said. As I wrote above:
>
>> but AFAIK almost all users prefer the slight
>> overhead of dirtying a card (and in return get a generational GC) for
>> the use cases where a single-gen mark-compact algorithm would be
>> applicable.
>
> There are of course use cases for single-gen mark-sweep algorithms, and as I
> write above, for single-gen mark-compact algorithms as well. But for Java, and
> OpenJDK, at least it is my understanding that most users prefer a generational
> algorithm like Serial compared to a single-gen mark-compact algorithm (at least
> I have not seen a lot of users asking for that). But maybe I'm missing something
> here?
Mmm, "prefer" is not the same as "have no other option than trust JVM developers
that generational is better for their workloads, and having no energy to try to
build the collector proving otherwise". Because there is no no collector in
OpenJDK that avoids generational barriers. Saying "prefer" here is very very odd.

> No, I don't view it that way. Having the code in the upstream repository and
> having it exposed in binary builds are two very different things to me, and
> come with very different requirements in terms of maintenance. If the code is
> in the upstream repository, then it is a tool for developers working in OpenJDK
> and for integrators building OpenJDK. We have a much easier time changing such
> code compared to code that users have come to rely on (and expect certain
> behavior from).

Okay. I am still quite a bit puzzled why "experimental" comes with any notion of
supportability, compatibility, testing coverage, etc. I don't think most of
current experimental options declared in globals.hpp come with that in mind. In
fact, many are even marked with "(Unsafe) (Unstable)"...


>> I hear you, but the thing is, Epsilon does not seem like a coding exercise anymore.
>> Epsilon is useful for GC performance work, especially when readily available, and
>> there are willing users ready to adopt it. Similarly to how we respect the maintainers'
>> burden in the product, we also have to see what benefits users, especially the
>> ones who are championing our project's performance even by cutting corners with
>> e.g. no-op GCs.
>
> Yes, you always have to weigh the benefits against the costs, and in this case,
> at least for now and to me, the benefits of exposing Epsilon GC to non-JVM
> developers do not outweigh the costs. Who knows, maybe this will change
> and we redo the cost/benefit analysis? It is very easy to go from a developer flag
> to an experimental flag; it is way, way harder to go from an experimental flag to
> a developer flag.
Okay, that sounds like a compromise to me: push Epsilon under "develop" flag,
and then ask users or downstreams to switch it to "product" if they want. This
is not ideal, but it works. Does that resolve your concerns?
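
For illustration, the flag declaration difference that downstreams would patch
is a single word, in the globals.hpp macro style (schematic, not the exact
Sandbox line, and modulo the macro line-continuation backslashes):

  // Upstream: settable only in debug builds, compiled as a constant in release.
  develop(bool, UseEpsilonGC, false,
          "Use the Epsilon (no-op) garbage collector")

  // A downstream that wants to expose it would change the first word, e.g.:
  experimental(bool, UseEpsilonGC, false,
          "Use the Epsilon (no-op) garbage collector")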

Thanks,
-Aleksey



Re: RFC: Epsilon GC JEP

Erik Helin-2
On 07/18/2017 05:41 PM, Aleksey Shipilev wrote:

>> Yes, you always have to weigh the benefits against the costs, and in this case,
>> at least for now and to me, the benefits of exposing Epsilon GC to non-JVM
>> developers do not outweigh the costs. Who knows, maybe this will change
>> and we redo the cost/benefit analysis? It is very easy to go from a developer flag
>> to an experimental flag; it is way, way harder to go from an experimental flag to
>> a developer flag.
>
> Okay, that sounds like a compromise to me: push Epsilon under "develop" flag,
> and then ask users or downstreams to switch it to "product" if they want. This
> is not ideal, but it works. Does that resolve your concerns?

Yep, I would prefer it to be a develop flag. Will you update the JEP to
reflect this?

Thanks,
Erik

> Thanks,
> -Aleksey
>

Re: RFC: Epsilon GC JEP

Thomas Schatzl
In reply to this post by Aleksey Shipilev-4
Hi Aleksey,

On Tue, 2017-07-18 at 17:41 +0200, Aleksey Shipilev wrote:

> Hi Erik,
>
> I think we are coming to a consensus here.
>
> Piece-wise:
>
> On 07/18/2017 05:22 PM, Erik Helin wrote:
> >
> > No, I don't view it that way. Having the code in the upstream
> > repository and having it exposed in binary builds are two very
> > different things to me, and come with very different requirements
> > in terms of maintenance. If the code is in the upstream repository,
> > then it is a tool for developers working in OpenJDK and for
> > integrators building OpenJDK. We have a much easier time changing
> > such code compared to code that users have come to rely on (and
> > expect certain behavior from).
>
> Okay. I am still quite a bit puzzled why "experimental" comes with
> any notion of supportability, compatibility, testing coverage, etc. 

Every option that is exposed to the user in the product build is part
of the public API, and so must be supported similarly to other options.
An experimental option is just another "official" interface to the user
as described by the CSR wiki page [1].

Just consider this: a security issue in an experimental option is just
as much a security issue in the product as any other. Since we do not
want to wait for that to happen, it needs the same support and testing as
any other.

Experimental options are (at least in the GC group) more obscure
options that help you shoot yourself in the foot, performance-wise, if
you fiddle too much with them :)
So using -XX:+UnlockExperimentalVMOptions is more an acknowledgment
that you are really sure you want to do that.

They may still be required by some users for (what we think are)
application corner cases that are not (yet?) handled well automatically
by the VM. Or as alternatives to other product options that only apply
to e.g. a single collector. Or they may just be mislabelled as such.

> I don't think most of current experimental options declared in
> globals.hpp come with that in mind. In fact, many are even marked 
> with "(Unsafe) (Unstable)"...

The VM is a very old project, from before when terms like "unit
testing", "code coverage" and related were a thing. Around 28 of those
remaining out of 1729 in globals.hpp does not sound too bad. Could be
better of course (also the actual number of switches ;)).

Also I am not sure whether they are actually unsafe and unstable any
more.

Thanks,
  Thomas

[1] https://wiki.openjdk.java.net/display/csr/ ; there is a more
detailed, likely provisional guide [2] covering options a bit more.
[2] http://cr.openjdk.java.net/~darcy/OpenJdkDevGuide/OpenJdkDevelopersGuide.v0.777.html#kinds_of_interfaces


Re: RFC: Epsilon GC JEP

Aleksey Shipilev-4
On 07/19/2017 11:27 AM, Thomas Schatzl wrote:

>> Okay. I am still quite a bit puzzled why "experimental" comes with
>> any notion of supportability, compatibility, testing coverage, etc.
>
> Every option that is exposed to the user in the product build is part
> of the public API, and so must be supported similarly to other options.
> An experimental option is just another "official" interface to the user
> as described by the CSR wiki page [1].
>
> Just consider this: a security issue in an experimental option is just
> as much a security issue in the product as any other. Since we do not
> want to wait for that to happen, it needs the same support and testing as
> any other.
But, but... the definition in globals.hpp:

// experimental flags are in support of features that ***are not
// part of the officially supported product***, but are available
// for experimenting with. They could, for example, be performance
// features that ***may not have undergone full or rigorous QA***, but which may
// help performance in some cases and released for experimentation
// by the community of users and developers. This flag also allows one to
// be able to build a fully supported product that nonetheless also
// ships with some ***unsupported, lightly tested***, experimental features.
// Like the UnlockDiagnosticVMOptions flag above, there is a corresponding
// UnlockExperimentalVMOptions flag, which allows the control and
// modification of the experimental flags.

(emphasis mine)

Are you saying that the GC group makes that definition stronger by saying
experimental flags are like product flags functional-stability-wise, but not
performance-wise? So, does that mean the GC group runs functional testing with
every combination of experimental options?

Thanks,
-Aleksey



Re: RFC: Epsilon GC JEP

Aleksey Shipilev-4
In reply to this post by Erik Helin-2
On 07/19/2017 11:17 AM, Erik Helin wrote:

> On 07/18/2017 05:41 PM, Aleksey Shipilev wrote:
>>> Yes, you always have to weigh the benefits against the costs, and in this case,
>>> at least for now and to me, the benefits of exposing Epsilon GC to non-JVM
>>> developers do not outweigh the costs. Who knows, maybe this will change
>>> and we redo the cost/benefit analysis? It is very easy to go from a developer flag
>>> to an experimental flag; it is way, way harder to go from an experimental flag to
>>> a developer flag.
>>
>> Okay, that sounds like a compromise to me: push Epsilon under "develop" flag,
>> and then ask users or downstreams to switch it to "product" if they want. This
>> is not ideal, but it works. Does that resolve your concerns?
>
> Yep, I would prefer it to be a develop flag. Will you update the JEP to reflect
> this?
Updated.

Better yet, the implementation is updated to make Epsilon 'develop', which
required some trickery to make the tests pass with release builds and survive
changing the flag back to 'product' or 'experimental' without omitting the
tests. Also, my build servers now patch Epsilon builds back to 'experimental'.
<much-maintainability-wow-doge.jpg>

Cheers,
-Aleksey



Re: RFC: Epsilon GC JEP

Aleksey Shipilev-4
In reply to this post by Aleksey Shipilev-4
On 07/10/2017 10:14 PM, Aleksey Shipilev wrote:
> I would like to solicit feedback on Epsilon GC JEP:
>   https://bugs.openjdk.java.net/browse/JDK-8174901
>   http://openjdk.java.net/jeps/8174901

Following up on this after the discussion, please add yourself to Reviewed-by (or Endorsed-by, if
you are the group lead) in the JEP!

Erik Helin, Roman Kennke, Thomas Schatzl, and Erik Osterlund replied in this thread. More reviews
and endorsements are welcome.

Thanks,
-Aleksey



