Fork/Join and Parallel Calamity? Or storm in a teacup?

Fork/Join and Parallel Calamity? Or storm in a teacup?

Dr Heinz M. Kabutz
Hi fellow concurrency interest members!

A friend forwarded me this article:

http://coopsoft.com/ar/Calamity2Article.html

I'm always wary of those who have a product to flog and who bash the
JDK.  I'd be interested to hear your opinion on this writing though.

Regards

Heinz
--
Dr Heinz M. Kabutz (PhD CompSci)
Author of "The Java(tm) Specialists' Newsletter"
Sun/Oracle Java Champion since 2005
JavaOne Rock Star Speaker 2012
http://www.javaspecialists.eu
Tel: +30 69 75 595 262
Skype: kabutz

_______________________________________________
Concurrency-interest mailing list
[hidden email]
http://cs.oswego.edu/mailman/listinfo/concurrency-interest

Re: Fork/Join and Parallel Calamity? Or storm in a teacup?

Doug Lea
On 04/03/2015 08:13 AM, Dr Heinz M. Kabutz wrote:
> Hi fellow concurrency interest members!
>
> A friend forwarded me this article:
>
> http://coopsoft.com/ar/Calamity2Article.html
>
> I'm always wary of those who have a product to flog and who bash the JDK.  I'd
> be interested to hear your opinion on this writing though.
>

[A brief escape from near-quarantine for the past two and
next month dealing with our five-year dept program review
and other local stuff.]

My overall take reading this and Ed Harned's other posts is that
they don't reflect historical causality: Work-stealing frameworks
(FJ and those in other languages/platforms) were initially
targeted to classic divide-and-conquer computational workloads. But users
soon discovered that if they ignore scope disclaimers, FJ can work
well in other use cases (because of low contention, high utilization,
etc). Rather than telling people to stop doing this, we've
continually evolved internals from "can work well" to
"typically work well", across an increasing range of use cases
(which keep expanding). I think that Ed's argument is that
we should not have done this, but instead provided some other
Executor framework, or left it for third-party providers.
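[Editorial note: the "classic divide-and-conquer computational workloads" mentioned above have a characteristic shape: split until a subproblem is small, solve it sequentially below a cutoff, and combine results on the way back. A minimal hypothetical sketch, not code from this thread (the class name and threshold value are illustrative):]

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Sums the integers in [lo, hi) by recursive splitting.
class RangeSum extends RecursiveTask<Long> {
    static final int THRESHOLD = 128;  // below this size, compute sequentially

    final int lo, hi;
    RangeSum(int lo, int hi) { this.lo = lo; this.hi = hi; }

    @Override
    protected Long compute() {
        if (hi - lo <= THRESHOLD) {      // small enough: no further forking
            long sum = 0;
            for (int i = lo; i < hi; i++) sum += i;
            return sum;
        }
        int mid = (lo + hi) >>> 1;
        RangeSum left = new RangeSum(lo, mid);
        RangeSum right = new RangeSum(mid, hi);
        left.fork();                          // push left half onto this worker's deque
        return right.compute() + left.join(); // compute right locally, then join left
    }

    public static void main(String[] args) {
        long total = ForkJoinPool.commonPool().invoke(new RangeSum(0, 1_000));
        System.out.println(total); // 499500
    }
}
```

[fork() makes the left half available for idle workers to steal, while computing the right half directly keeps the current worker busy instead of blocking.]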

-Doug




Re: Fork/Join and Parallel Calamity? Or storm in a teacup?

Aleksey Shipilev-2
I'd just short-cut the entire thread by referencing the Java Posse
thread from four years ago:
  https://groups.google.com/forum/#!topic/javaposse/Rfp7t23lQTo

On 04/03/2015 04:07 PM, Doug Lea wrote:

> [A brief escape from near-quarantine for the past two and
> next month dealing with our five-year dept program review
> and other local stuff.]
>
> My overall take reading this and Ed Harned's other posts is that
> they don't reflect historical causality: Work-stealing frameworks
> (FJ and those in other languages/platforms) were initially
> targeted to classic divide-and-conquer computational workloads. But users
> soon discovered that if they ignore scope disclaimers, FJ can work
> well in other use cases (because of low contention, high utilization,
> etc). Rather than telling people to stop doing this, we've
> continually evolved internals from "can work well" to
> "typically work well", across an increasing range of use cases
> (which keep expanding). I think that Ed's argument is that
> we should not have done this,  but instead provided some other
> different Executor framework, or left it for third-party providers.
I think that sums it up nicely.

>  I'd be interested to hear your opinion on this writing though.

Way too often I find people having a pipe dream of "having something
better", before they actually start to build, support and maintain the
better thing. Only to realize, in way too many cases, the status quo is
actually (close to) Pareto efficient, and improving something is much
less obvious than you imagined. (God, I sound old!).

Now, it's completely understandable that not everyone has cycles to get their
hands dirty with development. Suggestions and constructive critique are
always welcome. However, when you are suggesting something, you'd better
make sure your suggestion is doable, viable, practical -- or at least,
have a fair amount of skepticism in it, asking others to poke holes.

In other words, in addition to describing how a particular
implementation "fails" your arbitrary set of expectations, explain, in
great detail, how the alternative should improve what you care about and
why it should not regress what you personally don't care about.
Ultimately, have working code that clearly demonstrates the benefits
of the alternative.

Now go back and read the "What is the answer for success?" section in the
article. This is why I believe the article is just bashing, and does not
deserve the attention it gets. Speculation: the article is inflammatory
to draw attention for the sake of attention. Don't take that bait.

-Aleksey.



Re: Fork/Join and Parallel Calamity? Or storm in a teacup?

Edward Harned-2
In reply to this post by Dr Heinz M. Kabutz
I do not have a product to flog. In 2010 I submitted a proof-of-concept to the good professor showing that scatter-gather works just as well as what he was proposing for Java7. Since he ignored the proof, I took the parallel engine out of a Task Parallel product (open-source) I maintain and put in the Data Parallel engine. This product is also open-source. It is not suitable for an API, since it is a full-feature Data Parallel product.

I have never bashed the JDK. Certain features do not belong in the core product. A multi/threading/tasking framework belongs outside the core product. That is all I have said.

The points I made in all three articles are accurate.

ed


Re: Fork/Join and Parallel Calamity? Or storm in a teacup?

Dr Heinz M. Kabutz
In reply to this post by Aleksey Shipilev-2
Thanks for the pointer, Aleksey - I forgot about that discussion on JavaPosse.  Even though I did not contribute to the discussion, I'm pretty sure that at some point in my life I read it.

I think the point that Ed was missing at the time was the forward look to parallel streams, which to me was the main reason for having F/J in the first place.  That said, in all my experiments F/J has performed admirably as long as I coded it correctly, following some simple rules, like making sure that the tasks have a threshold below which they are executed sequentially; the usual parallel guidelines that apply in other situations as well.  With the advent of parallel streams and Spliterators, I don't see much reason to use Fork/Join directly anymore.
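[Editorial note: as a hedged illustration of that point, a computation that once needed a hand-written RecursiveTask can be written as a parallel stream, which splits the range via a Spliterator and schedules the chunks on the common ForkJoinPool. The class name below is hypothetical:]

```java
import java.util.stream.LongStream;

class ParallelSum {
    // Sums [0, n) in parallel; the stream library handles splitting,
    // sequential thresholds, and scheduling on the common ForkJoinPool.
    static long sum(long n) {
        return LongStream.range(0, n).parallel().sum();
    }

    public static void main(String[] args) {
        System.out.println(sum(1_000)); // 499500
    }
}
```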

The one thing that is difficult is to monitor what is happening inside the ForkJoinPool.  Kirk tried and found it rather challenging.
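[Editorial note: ForkJoinPool does expose some coarse counters, though their javadoc describes them as approximations, which is part of what makes serious monitoring challenging. A small sketch with a hypothetical class name:]

```java
import java.util.concurrent.ForkJoinPool;

class PoolSnapshot {
    // Reads the counters ForkJoinPool itself exposes; all of them are
    // best-effort estimates, useful for tuning rather than exact accounting.
    static String describe(ForkJoinPool pool) {
        return "parallelism=" + pool.getParallelism()
             + " poolSize=" + pool.getPoolSize()
             + " active=" + pool.getActiveThreadCount()
             + " running=" + pool.getRunningThreadCount()
             + " queuedTasks=" + pool.getQueuedTaskCount()
             + " queuedSubmissions=" + pool.getQueuedSubmissionCount()
             + " steals=" + pool.getStealCount();
    }

    public static void main(String[] args) {
        System.out.println(describe(ForkJoinPool.commonPool()));
    }
}
```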
Regards

Heinz



Re: Fork/Join and Parallel Calamity? Or storm in a teacup?

Joe Bowbeer
Adding some pre-history: an FJ framework had been part of Doug's original concurrent package (1998?), which was ported/rewritten for Java 5.

FJ is discussed in chapter 4.4 of Doug Lea's CPJ book.

But FJ itself didn't make the cut for Java 5, and was not integrated into j.u.c. until Java 7, several years later, when its utility outweighed its drawbacks; it was included so that others could use it and wouldn't have to ship their own.



Re: Fork/Join and Parallel Calamity? Or storm in a teacup?

Viktor Klang
In reply to this post by Edward Harned-2


On Fri, Apr 3, 2015 at 4:32 PM, Edward Harned <[hidden email]> wrote:
> I have never bashed the JDK. Certain features do not belong in the core
> product. A multi/threading/tasking framework belongs outside the core
> product. That is all I have said.

What are the arguments for that claim?
Should there be an HttpClient? CORBA? A ScriptEngine? Who decides what's core and what's not?
 

--
Cheers,


Re: Fork/Join and Parallel Calamity? Or storm in a teacup?

Andrew Haley
On 07/04/15 09:45, Viktor Klang wrote:
> On Fri, Apr 3, 2015 at 4:32 PM, Edward Harned <[hidden email]> wrote:
>
>> I have never bashed the JDK. Certain features do not belong in the
>> core product. A multi/threading/tasking framework belongs outside
>> the core product. That is all I have said.
>
> What's the argument(s) for that claim?
> Should there be a HttpClient? CORBA? ScriptEngine? Who decides what's core
> and what's not?

I was rather hoping that we wouldn't have to worry about that sort of
thing any more because we now have a modular JDK.  Having said that,
j.u.c seems to be in java.base.

The JDK has always been a rather maximal everything-including-the-
kitchen-sink lump, and IME people seem rather to like it like that.
If some JDK packages are going to use fork/join, then fork/join is
going to be in the core JDK.

I suppose the problem -- if there is one -- is that any frameworks
which are part of the JDK get an automatic legitimacy which may lead
to them being used even if something else would be more appropriate.
I would tend to do that because of a fear that software outside the
JDK might be abandoned or not well maintained.

Andrew.

Re: Fork/Join and Parallel Calamity? Or storm in a teacup?

Edward Harned-2
Just to clear the air:

The article written in 2010 was about why the F/J framework should not be in core Java.

The part-two article cited here was written in 2013 and is about how the framework, when used as the parallel engine for Java 8, is fatally flawed.

The third part of the series, written in 2015, is about how Java 8u40 did not resolve the severe performance problems.

ed


Re: Fork/Join and Parallel Calamity? Or storm in a teacup?

Andrew Haley
In reply to this post by Aleksey Shipilev-2
On 03/04/15 15:24, Aleksey Shipilev wrote:

> Way too often I find people having a pipe dream of "having something better", before they actually start to build, support and maintain the better thing. Only to realize, in way too many cases, the status quo is actually (close to) Pareto efficient, and improving something is much less obvious than you imagined. (God, I sound old!).
>
> Now, it's completely understandable not everyone has cycles to get their hands dirty with development. Suggestions and constructive critique are always welcome. However, when you are suggesting something, you'd better make sure your suggestion is doable, viable, practical -- or at least, have a fair amount of skepticism in it, asking others to poke holes.

Right.  I guess there's nothing to prevent anyone from creating a more
performant parallel stream and demonstrating it.  After all, writing
long technical articles takes ages, and there's no point without
working code.

Andrew.


Re: Fork/Join and Parallel Calamity? Or storm in a teacup?

Edward Harned-2
In reply to this post by Aleksey Shipilev-2
I don’t do systems programming or API development, so I don’t have a comparable F/J framework. I do full-feature application services. If you would like to see a full-feature scatter-gather Data Parallel framework that is “working code that clearly demonstrates the benefits of the alternative”, then have a look here:
http://sourceforge.net/projects/tymeacdse

Using those principles/structures is what I meant by “What is the answer for success”.

ed
