Low-Overhead Heap Profiling


Low-Overhead Heap Profiling

JC Beyler
Hello all,

This is a follow-up from Jeremy's initial email from last year:

I've gone ahead and started preparing this, and Jeremy and I went down the route of actually writing it up in JEP form:

I think the original conversation that happened last year in that thread still holds true:

 - We have a patch at Google that we think others might be interested in
    - It provides a means to understand where the allocation hotspots are at a very low overhead
    - Since it is at a low overhead, we can leave it on by default

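To illustrate why such a profiler can stay cheap enough to leave on by default, here is a rough sketch of the kind of byte-interval sampling technique typically used for this (this is an illustrative reconstruction, not the actual Google patch; the class and names are invented):

```cpp
#include <cstddef>
#include <random>

// Illustrative sketch: sample allocations on average once every
// mean_interval_bytes bytes by counting down a per-thread byte budget.
// Drawing the next threshold from an exponential distribution keeps the
// samples unbiased with respect to allocation size.
class AllocationSampler {
 public:
  explicit AllocationSampler(size_t mean_interval_bytes)
      : rng_(12345),
        dist_(1.0 / static_cast<double>(mean_interval_bytes)),
        bytes_until_sample_(next_interval()) {}

  // Called on each allocation; returns true when it should be sampled.
  // The common path is a single subtraction and compare, which is why
  // the overhead stays low.
  bool should_sample(size_t alloc_size) {
    if (alloc_size < bytes_until_sample_) {
      bytes_until_sample_ -= alloc_size;
      return false;
    }
    bytes_until_sample_ = next_interval();
    return true;
  }

 private:
  size_t next_interval() {
    return static_cast<size_t>(dist_(rng_)) + 1;
  }

  std::mt19937 rng_;
  std::exponential_distribution<double> dist_;
  size_t bytes_until_sample_;
};
```

With a 512k mean interval, allocating 512 MB in small chunks should trigger roughly a thousand samples, which is what keeps the cost negligible on the fast path.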
So I come to the mailing list with Jeremy's initial question: 
"I thought I would ask if there is any interest / if I should write a JEP / if I should just forget it."

A year ago, it seemed some thought it was a good idea; is this still true?

Thanks,
Jc



Re: Low-Overhead Heap Profiling

JC Beyler
Hi all,

To move the discussion forward, with Chuck Rasbold's help in making a webrev, we pushed this:
415 lines changed: 399 ins; 13 del; 3 mod; 51122 unchg

This is not a final change that covers the whole proposal from the JBS entry (https://bugs.openjdk.java.net/browse/JDK-8177374); what it does show are parts of the proposed implementation, and hopefully it can get the conversation going as I work through the details.

For example, the C2 changes for the allocation path are here: http://cr.openjdk.java.net/~rasbold/heapz/webrev.00/src/share/vm/opto/macro.cpp.patch

Hopefully this all makes sense; thank you in advance for all your comments!
Jc




Re: Low-Overhead Heap Profiling

JC Beyler
Hi all,

I worked on getting a few numbers for the overhead and accuracy of my feature. I'm unsure whether this is the right place to provide the full data, so I am just summarizing it here for now.

- Overhead of the feature

Using the Dacapo benchmark suite (http://dacapobench.org/), my initial results are that sampling adds a 2.4% overhead with a 512k sampling rate, 512k being our default setting.

- Note: this was without the tradesoap, tradebeans, and tomcat benchmarks, since they did not work with my JDK9 build (it seems to be an issue between Dacapo and JDK9)
- I want to rerun the benchmarks next week to ensure the numbers are stable

- Accuracy of the feature

I wrote a small microbenchmark that allocates from two different stacktraces at a given ratio, for example 10% from stacktrace S1 and 90% from stacktrace S2. The microbenchmark was run 20 times; I averaged the results and looked at the accuracy. Statistically it seems sound: when I allocated 10% from S1 and 90% from S2, with a sampling rate of 512k, I obtained 9.61% for S1 and 90.49% for S2.

Let me know if there are any questions on the numbers and if you'd like to see some more data.

Note: this was done using our internal JDK8 implementation, since the webrev provided at http://cr.openjdk.java.net/~rasbold/heapz/webrev.00/index.html does not yet contain the whole implementation and therefore would have been misleading.

Thanks,
Jc





Re: Low-Overhead Heap Profiling

JC Beyler
Hi all,

I've added size information to the allocation sampling system. This allows the callback to remember the size of each sampled allocation.

The new webrev.01 also adds the actual heap monitoring sampling system in files:

My next step is to add the GC part to the webrev, which will allow users to determine which objects are live and which are garbage.
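The bookkeeping described in the last two paragraphs, recording the size of each sampled allocation and later splitting samples into live and garbage, could look roughly like this (a hypothetical sketch; the struct and class names are invented and do not come from the webrev):

```cpp
#include <cstddef>
#include <vector>

// Hypothetical sketch of the sampling bookkeeping: each sampled
// allocation is recorded with its size, and a post-GC pass can flip
// records whose object died from "live" to "garbage", letting users
// split the sampled bytes into live and dead parts.
struct SampledAllocation {
  const void* addr;
  size_t size;   // size information now kept per sample
  bool live;
};

class HeapMonitor {
 public:
  // Callback invoked when an allocation is chosen for sampling.
  void record_sample(const void* addr, size_t size) {
    samples_.push_back({addr, size, true});
  }

  // Called after a GC for each sampled object that did not survive.
  void mark_dead(const void* addr) {
    for (auto& s : samples_) {
      if (s.addr == addr) s.live = false;
    }
  }

  size_t live_bytes() const {
    size_t total = 0;
    for (const auto& s : samples_) {
      if (s.live) total += s.size;
    }
    return total;
  }

  size_t garbage_bytes() const {
    size_t total = 0;
    for (const auto& s : samples_) {
      if (!s.live) total += s.size;
    }
    return total;
  }

 private:
  std::vector<SampledAllocation> samples_;
};
```

In a real VM the dead-object detection would of course be driven by the collector itself rather than by explicit calls, but the live/garbage split of sampled bytes is the user-visible result.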

Thanks for your attention and let me know if there are any questions!

Have a wonderful Friday!
Jc
