Re: Concurrency-interest Digest, Vol 167, Issue 11



JSR166 Concurrency mailing list
[Doug] > I am surprised that you did not compare LongAdder, that uses CAS failures to guide contention-spreading that is usually much more effective than backoff. On the other hand it cannot be used when you need the atomic value of the "get".

Exactly: I was only comparing the "atomic" API, and LongAdder, while fast, can only be used for counting, not for concurrency control.
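
For anyone skimming, a small sketch of the distinction (hypothetical names, not from the thread): LongAdder is fine when you only ever need an eventual total, while getAndIncrement() is what you need when each caller must observe a unique, atomically obtained value.

import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.atomic.LongAdder;

class CounterVsTicket {
    // Counting only: nobody needs the intermediate values, so contention
    // can be spread across internal cells.
    static final LongAdder hits = new LongAdder();

    // Concurrency control: every caller must get a distinct value, so the
    // "get" part of getAndIncrement has to be atomic.
    static final AtomicLong nextTicket = new AtomicLong();

    public static void main(String[] args) {
        hits.increment();
        long myTicket = nextTicket.getAndIncrement();
        System.out.println("hits so far: " + hits.sum() + ", my ticket: " + myTicket);
    }
}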

[Franz],
Thank you for the more detailed explanation!

Valentin

On Mon, 28 Jan 2019 at 05:39, <[hidden email]> wrote:
Send Concurrency-interest mailing list submissions to
        [hidden email]

To subscribe or unsubscribe via the World Wide Web, visit
        http://cs.oswego.edu/mailman/listinfo/concurrency-interest
or, via email, send a message with subject or body 'help' to
        [hidden email]

You can reach the person managing the list at
        [hidden email]

When replying, please edit your Subject line so it is more specific
than "Re: Contents of Concurrency-interest digest..."


Today's Topics:

   1. Re: getAndIncrement with backoff (Andrew Haley)
   2. Re: getAndIncrement with backoff (Andrew Haley)
   3. Re: Concurrency-interest Digest, Vol 167, Issue 10
      (Valentin Kovalenko)
   4. getAndIncrement with backoff (Valentin Kovalenko)
   5. Re: getAndIncrement with backoff (Doug Lea)
   6. Re: getAndIncrement with backoff (Francesco Nigro)


----------------------------------------------------------------------

Message: 1
Date: Mon, 28 Jan 2019 11:33:34 +0000
From: Andrew Haley <[hidden email]>
To: Francesco Nigro <[hidden email]>
Cc: Valentin Kovalenko <[hidden email]>,
        concurrency-interest <[hidden email]>
Subject: Re: [concurrency-interest] getAndIncrement with backoff
Message-ID: <[hidden email]>
Content-Type: text/plain; charset=utf-8

On 1/28/19 11:20 AM, Francesco Nigro wrote:

> correct, but my concern about latencies is actually the backoff
> strategy itself: if not chosen correctly it would introduce safepoint
> polls that "could" lead to unpredictable slowdowns in specific cases.

True, but that would be a really extreme case.

> The same should be said about using Thread::onSpinWait, which could be backed
> by a pause instruction and was recently at the center of a discussion
> about its effectiveness (badly implemented, AFAIK).

I think so too. I'm waiting to see some proper working measurements on
AArch64 to convince me that it's useful.
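
For reference, a minimal sketch (hypothetical names) of the spin-wait pattern Thread.onSpinWait() is meant for; whether the hint lowers to a pause-like instruction, or to nothing at all, is exactly the per-platform question here.

import java.util.concurrent.atomic.AtomicBoolean;

class SpinWaitExample {
    private final AtomicBoolean dataReady = new AtomicBoolean(false);

    void publish() {
        // ... prepare data ...
        dataReady.set(true);
    }

    void awaitData() {
        while (!dataReady.get()) {
            // Hint to the runtime/CPU that we are in a busy-wait loop;
            // may become a pause-like instruction, or a no-op.
            Thread.onSpinWait();
        }
        // ... consume data ...
    }
}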

--
Andrew Haley
Java Platform Lead Engineer
Red Hat UK Ltd. <https://www.redhat.com>
EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671


------------------------------

Message: 2
Date: Mon, 28 Jan 2019 11:37:52 +0000
From: Andrew Haley <[hidden email]>
To: Francesco Nigro <[hidden email]>
Cc: Valentin Kovalenko <[hidden email]>,
        concurrency-interest <[hidden email]>
Subject: Re: [concurrency-interest] getAndIncrement with backoff
Message-ID: <[hidden email]>
Content-Type: text/plain; charset=utf-8

On 1/28/19 11:33 AM, Andrew Haley via Concurrency-interest wrote:

> On 1/28/19 11:20 AM, Francesco Nigro wrote:
>
>> correct, but my concern about latencies is actually the backoff
>> strategy itself: if not chosen correctly it would introduce safepoint
>> polls that "could" lead to unpredictable slowdowns in specific cases.
>
> True, but that would be a really extreme case.
>
>> The same should be said about using Thread::onSpinWait, which could be backed
>> by a pause instruction and was recently at the center of a discussion
>> about its effectiveness (badly implemented, AFAIK).
>
> I think so too. I'm waiting to see some proper working measurements on
> AArch64 to convince me that it's useful.

e.g. http://mail.openjdk.java.net/pipermail/aarch64-port-dev/2017-August/004870.html

--
Andrew Haley
Java Platform Lead Engineer
Red Hat UK Ltd. <https://www.redhat.com>
EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671


------------------------------

Message: 3
Date: Mon, 28 Jan 2019 04:43:42 -0700
From: Valentin Kovalenko <[hidden email]>
To: concurrency-interest <[hidden email]>
Subject: Re: [concurrency-interest] Concurrency-interest Digest, Vol
        167,    Issue 10
Message-ID:
        <[hidden email]>
Content-Type: text/plain; charset="utf-8"

> there isn’t a universal backoff strategy that suits all platforms and
> cases

It does not seem a problem to have different intrinsic backoff strategies
for different platforms, but this, of course, can't be done for different
cases. However, since backoff only comes into play on a CAS failure
(or even multiple failures), it should affect neither throughput nor
latency in a low-contention scenario. But Francesco provided an
explanation of how this can still negatively affect performance
(though I, unfortunately, didn't understand it). Francesco, may I ask you
to explain the same thing for a less educated audience? :)

> lock:xadd defers this to the hardware. The instruction has no “redo” -
> but of course internally at hardware level it probably is still a CAS-like
> loop with a retry - and a backoff (I think all consensus protocols have
> corner cases that can only be resolved through timing a race).

So basically this is the same idea we usually use in programming: rely on
the underlying layer's implementation when one is provided; the underlying
layer here is the hardware.
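
Putting the two points above together in a sketch (hypothetical class, the spin cap is picked arbitrarily): plain getAndIncrement() lets the hardware arbitrate (on x86 the JIT typically emits a single lock xadd), while the hand-rolled variant only enters its backoff branch after compareAndSet has failed, so an uncontended caller never pays for it.

import java.util.concurrent.atomic.AtomicLong;

final class BackoffCounter {
    private static final int MAX_SPINS = 1 << 10; // arbitrary cap
    private final AtomicLong value = new AtomicLong();

    // Plain version: the JVM/hardware handle contention internally
    // (typically a single lock xadd on x86).
    long getAndIncrementPlain() {
        return value.getAndIncrement();
    }

    // Hand-rolled version: the backoff branch is only reached after a
    // failed CAS, so an uncontended caller never executes it.
    long getAndIncrementWithBackoff() {
        int spins = 1;
        for (;;) {
            long current = value.get();
            if (value.compareAndSet(current, current + 1)) {
                return current;
            }
            for (int i = 0; i < spins; i++) {
                Thread.onSpinWait();
            }
            if (spins < MAX_SPINS) {
                spins <<= 1; // exponential backoff
            }
        }
    }
}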

Valentin

------------------------------

Message: 4
Date: Mon, 28 Jan 2019 04:46:03 -0700
From: Valentin Kovalenko <[hidden email]>
To: concurrency-interest <[hidden email]>
Subject: [concurrency-interest] getAndIncrement with backoff
Message-ID:
        <CAO-wXwKO=T8fL5BTtGw8=Ad_SqdzE31ydpaSY=[hidden email]>
Content-Type: text/plain; charset="utf-8"

> there isn’t a universal backoff strategy that suits all platforms and
> cases

It does not seem a problem to have different intrinsic backoff strategies
for different platforms, but this, of course, can't be done for different
cases. However, since backoff only comes into play on a CAS failure
(or even multiple failures), it should affect neither throughput nor
latency in a low-contention scenario. But Francesco provided an
explanation of how this can still negatively affect performance
(though I, unfortunately, didn't understand it). Francesco, may I ask you
to explain the same thing for a less educated audience? :)

> lock:xadd defers this to the hardware. The instruction has no “redo” -
> but of course internally at hardware level it probably is still a CAS-like
> loop with a retry - and a backoff (I think all consensus protocols have
> corner cases that can only be resolved through timing a race).

So basically this is the same idea we usually use in programming: rely on
the underlying layer's implementation when one is provided; the underlying
layer here is the hardware.

Valentin

------------------------------

Message: 5
Date: Mon, 28 Jan 2019 06:52:41 -0500
From: Doug Lea <[hidden email]>
To: [hidden email]
Subject: Re: [concurrency-interest] getAndIncrement with backoff
Message-ID: <[hidden email]>
Content-Type: text/plain; charset=utf-8

On 1/28/19 2:29 AM, Valentin Kovalenko via Concurrency-interest wrote:

> It seems to be common knowledge (see "Lightweight Contention Management
> for Efficient Compare-and-Swap Operations",
> https://arxiv.org/pdf/1305.5800.pdf) that simple exponential backoff
> drastically increases the throughput of CAS-based operations (e.g.
> getAndIncrement). I checked this on my own, and yes, this is still true
> for OpenJDK 11:

I am surprised that you did not compare LongAdder, that uses CAS
failures to guide contention-spreading that is usually much more
effective than backoff. On the other hand it cannot be used when you
need the atomic value of the "get".

-Doug



------------------------------

Message: 6
Date: Mon, 28 Jan 2019 13:33:53 +0100
From: Francesco Nigro <[hidden email]>
To: concurrency-interest <[hidden email]>
Subject: Re: [concurrency-interest] getAndIncrement with backoff
Message-ID:
        <[hidden email]>
Content-Type: text/plain; charset="utf-8"

*@valentin*

> But Francesco provided an
explanation of how this still can negatively affect the performance
(though, I, unfortunately, didn't understand it). Francesco, may I ask you
to explain the same thing for a less educated audience?:)

Sorry: I've spoken without context :P
Long story short: there is an implementation detail of the JVM (though
sort-of-defined in the OpenJDK glossary) called a "safepoint": these are
safe states in which the JVM can perform specific operations/optimizations,
relying on the fact that during these intervals the mutator threads (i.e.
Java threads) cannot break the specific invariants those operations depend
on.
The mechanism that allows a safepoint to be reached (global or
individual/per-Java-thread, given that recent changes in JDK 10 introduced
the notion of thread-local handshakes, see
https://bugs.openjdk.java.net/browse/JDK-8185640) is to add safepoint
polls (the LinuxPerfAsmProfiler of JMH shows them as {poll} or {poll_return}
in the annotated ASM) among the compiled code instructions (in the bytecode
too, not just in the ASM).
When such a poll is reached AND a local/global safepoint is needed, it
issues a SEGV (aka segmentation fault) which, when handled, allows the JVM
to start the local/global safepoint.
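
To actually see those polls, a minimal JMH benchmark along these lines (hypothetical class name; perfasm needs Linux perf and the hsdis disassembler installed) can be run with "-prof perfasm", and you can then search the annotated assembly for {poll}/{poll_return}.

import java.util.concurrent.atomic.AtomicLong;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;

// Hypothetical benchmark, only meant to illustrate how to inspect the polls:
// run with "-prof perfasm" (the LinuxPerfAsmProfiler mentioned above) and
// grep the annotated ASM for {poll} / {poll_return}.
@State(Scope.Benchmark)
public class PollInspection {
    private final AtomicLong counter = new AtomicLong();

    @Benchmark
    public long getAndIncrement() {
        return counter.getAndIncrement();
    }
}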

If a backoff strategy contains a safepoint poll, the risk arises when a
global safepoint is requested: it will wait until all the mutator threads
have reached a poll AND the safepoint operation(s) have finished,
leading to latency outliers (or maybe just spikes, given that you could
tune -XX:GuaranteedSafepointInterval too).
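
To make the concern concrete, here is a sketch (hypothetical helper names, not from this thread) of a spin-only backoff next to a parking backoff; whether a given loop actually gets a {poll} at its back-edge is a JIT implementation detail, so the only way to know is to look at the annotated ASM as above.

import java.util.concurrent.locks.LockSupport;

final class BackoffStrategies {

    // Short bounded spin: stays on-CPU. HotSpot has historically been able
    // to omit safepoint polls in simple counted int loops, which is why spin
    // backoffs are usually written this way, but that is not guaranteed.
    static void spinBackoff(int spins) {
        for (int i = 0; i < spins; i++) {
            Thread.onSpinWait();
        }
    }

    // Parking backoff: deliberately yields to the VM. If a stop-the-world
    // operation is in progress when the timeout expires, the return to Java
    // code waits for it to finish, so the "short" pause can stretch by the
    // duration of that operation: the latency-outlier scenario above.
    static void parkBackoff(long nanos) {
        LockSupport.parkNanos(nanos);
    }
}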

Doing the same thing in C should be equivalent, considering that most of
the concurrent primitives are intrinsics (and won't contain such polls,
AFAIK) and the resulting ASM should be the same:
the reality is that if you write your own code you can't be sure that polls
are not added, unless you enjoy reading both the JVM source code and the
ASM from JMH :P

As *@andrew* has already written (and he can surely correct any
imprecision about safepoints/safepoint polls in what I've written), this is
not something that a user should care about, but if you're seeking
the best sustainable throughput (or just predictable latencies) it is
something that IMHO you should at least consider: then you can ignore it
right after :P

*@doug*
LongAdder rocks +100

Cheers,
Franz (Francesco is longer and most people prefer to call me this :))

On Mon, 28 Jan 2019 at 12:53, Doug Lea via Concurrency-interest <
[hidden email]> wrote:

> On 1/28/19 2:29 AM, Valentin Kovalenko via Concurrency-interest wrote:
>
> > It seems to be common knowledge (see "Lightweight Contention Management
> > for Efficient Compare-and-Swap Operations",
> > https://arxiv.org/pdf/1305.5800.pdf) that simple exponential backoff
> > drastically increases the throughput of CAS-based operations (e.g.
> > getAndIncrement). I checked this on my own, and yes, this is still true
> > for OpenJDK 11:
>
> I am surprised that you did not compare LongAdder, that uses CAS
> failures to guide contention-spreading that is usually much more
> effective than backoff. On the other hand it cannot be used when you
> need the atomic value of the "get".
>
> -Doug
>
> _______________________________________________
> Concurrency-interest mailing list
> [hidden email]
> http://cs.oswego.edu/mailman/listinfo/concurrency-interest
>

------------------------------

Subject: Digest Footer

_______________________________________________
Concurrency-interest mailing list
[hidden email]
http://cs.oswego.edu/mailman/listinfo/concurrency-interest


------------------------------

End of Concurrency-interest Digest, Vol 167, Issue 11
*****************************************************
