A little lock free algorithm [the code]


A little lock free algorithm [the code]

studdugie
public class TFailure
{
    volatile boolean dead;
    volatile long expiration =
        System.currentTimeMillis() + BACKOFF_INCREMENT;
    private static final long BACKOFF_MAX = ...;
    private static final long BACKOFF_INCREMENT = ...;
    private volatile long backoff = BACKOFF_INCREMENT;
    private final AtomicBoolean locked = new AtomicBoolean();

    /**
     * Increases the expiration timeout for the host if and only if
     * it's not already dead and the current expiration time has
     * elapsed.
     */
    void increment()
    {
        long millis;
        if( dead || (millis = System.currentTimeMillis()) < expiration )
            return;

        if( locked.compareAndSet( false, true ) )
        {
            backoff += BACKOFF_INCREMENT;
            if( backoff > BACKOFF_MAX )
                dead = true;
            else
                expiration = millis + backoff;
            locked.set( false );
        }
    }
}

_______________________________________________
Concurrency-interest mailing list
[hidden email]
http://altair.cs.oswego.edu/mailman/listinfo/concurrency-interest

RE: A little lock free algorithm [the code]

David Holmes-3
Okay this is basically a tryLock style of approach: if you can't get the
lock you know someone else is doing the increment. You've built your own
tryLock out of an AtomicBoolean.

If backoff is only read and written when you've set the atomic boolean then
you can drop the volatile for it as it "piggybacks" on the AtomicBoolean
memory semantics.
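As a minimal sketch of that pattern (class and field names here are illustrative, not taken from your code): the plain field is only ever touched between a successful CAS to true and the write back to false.

```java
import java.util.concurrent.atomic.AtomicBoolean;

// "tryLock" built from an AtomicBoolean: the plain (non-volatile) field
// `counter` is only read or written between a successful CAS to true and
// the write back to false, so its visibility piggybacks on the
// AtomicBoolean's volatile semantics.
class TryLockCounter
{
    private final AtomicBoolean locked = new AtomicBoolean();
    private long counter; // plain field: guarded by `locked`

    boolean tryIncrement()
    {
        if( !locked.compareAndSet( false, true ) )
            return false; // someone else is doing the increment
        try
        {
            counter++; // safe: only accessed while locked == true
            return true;
        }
        finally
        {
            locked.set( false ); // volatile write publishes `counter`
        }
    }

    long get()
    {
        while( !locked.compareAndSet( false, true ) ) { } // spin until acquired
        try { return counter; }
        finally { locked.set( false ); }
    }
}
```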

Cheers,
David Holmes

> -----Original Message-----
> From: [hidden email]
> [mailto:[hidden email]]On Behalf Of
> studdugie
> Sent: Friday, 31 March 2006 12:13 PM
> To: [hidden email]
> Subject: [concurrency-interest] A little lock free algorithm [the code]
>
>
> public class TFailure
> {
>     volatile boolean dead;
>     volatile long expiration =
>         System.currentTimeMillis() + BACKOFF_INCREMENT;
>     private static final long BACKOFF_MAX = ...;
>     private static final long BACKOFF_INCREMENT = ...;
>     private volatile long backoff = BACKOFF_INCREMENT;
>     private final AtomicBoolean locked = new AtomicBoolean();
>
>     /**
>      * Increases the expiration timeout for the host if and only if
>      * it's not already dead and the current expiration time has
>       * elapsed.
>      */
>     void increment()
>     {
>         long millis;
>         if( dead || (millis = System.currentTimeMillis()) < expiration )
>             return;
>
>         if( locked.compareAndSet( false, true ) )
>         {
>             backoff += BACKOFF_INCREMENT;
>             if( backoff > BACKOFF_MAX )
>                 dead = true;
>             else
>                 expiration = millis + backoff;
>             locked.set( false );
>         }
>     }
> }
>

Re: A little lock free algorithm [the code]

studdugie
But what about it (backoff) getting cached?

On 3/30/06, David Holmes <[hidden email]> wrote:

> Okay this is basically a tryLock style of approach: if you can't get the
> lock you know someone else is doing the increment. You've built your own
> tryLock out of an AtomicBoolean.
>
> If backoff is only read and written when you've set the atomic boolean then
> you can drop the volatile for it as it "piggybacks" on the AtomicBoolean
> memory semantics.
>
> Cheers,
> David Holmes
>
> > -----Original Message-----
> > From: [hidden email]
> > [mailto:[hidden email]]On Behalf Of
> > studdugie
> > Sent: Friday, 31 March 2006 12:13 PM
> > To: [hidden email]
> > Subject: [concurrency-interest] A little lock free algorithm [the code]
> >
> >
> > public class TFailure
> > {
> >     volatile boolean dead;
> >     volatile long expiration =
> >         System.currentTimeMillis() + BACKOFF_INCREMENT;
> >     private static final long BACKOFF_MAX = ...;
> >     private static final long BACKOFF_INCREMENT = ...;
> >     private volatile long backoff = BACKOFF_INCREMENT;
> >     private final AtomicBoolean locked = new AtomicBoolean();
> >
> >     /**
> >      * Increases the expiration timeout for the host if and only if
> >      * it's not already dead and the current expiration time has
> >       * elapsed.
> >      */
> >     void increment()
> >     {
> >         long millis;
> >         if( dead || (millis = System.currentTimeMillis()) < expiration )
> >             return;
> >
> >         if( locked.compareAndSet( false, true ) )
> >         {
> >             backoff += BACKOFF_INCREMENT;
> >             if( backoff > BACKOFF_MAX )
> >                 dead = true;
> >             else
> >                 expiration = millis + backoff;
> >             locked.set( false );
> >         }
> >     }
> > }
> >

RE: A little lock free algorithm [the code]

David Holmes-3
The use of backoff only occurs after a successful setting of the
AtomicBoolean. The memory semantics for Atomics are like volatiles. This is
what I meant by "piggybacking".

If thread A sets locked to true, updates backoff and sets locked to false
then:

  locked=true happens-before backoff=x
  backoff=x happens-before locked=false

If thread B successfully sets locked to true then:

  ThreadA:locked=false happens-before ThreadB:locked=true

and by transitivity the setting of backoff in Thread A happens-before its
use in Thread B.

Hence no need for backoff to be volatile *iff* only read/written while
locked==true.

Consider if you had used a synchronized block instead (or an explicit lock):
you would not make backoff volatile if it is only accessed while the lock is
held.
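A sketch of that synchronized variant (constants here are illustrative placeholders; only the locked region changes, and backoff needs no volatile because the monitor provides the visibility):

```java
// Sketch: same structure as TFailure, but with a synchronized block instead
// of the AtomicBoolean. BACKOFF_* values are illustrative, not from the post.
class TFailureSync
{
    private static final long BACKOFF_MAX = 4000;
    private static final long BACKOFF_INCREMENT = 1000;
    volatile boolean dead;
    volatile long expiration = System.currentTimeMillis() + BACKOFF_INCREMENT;
    private long backoff = BACKOFF_INCREMENT; // plain: only touched under the monitor

    void increment()
    {
        long millis;
        if( dead || (millis = System.currentTimeMillis()) < expiration )
            return;

        synchronized( this ) // monitor guarantees visibility of backoff
        {
            backoff += BACKOFF_INCREMENT;
            if( backoff > BACKOFF_MAX )
                dead = true;
            else
                expiration = millis + backoff;
        }
    }
}
```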

Cheers,
David Holmes

> -----Original Message-----
> From: studdugie [mailto:[hidden email]]
> Sent: Friday, 31 March 2006 12:31 PM
> To: [hidden email]
> Cc: [hidden email]
> Subject: Re: [concurrency-interest] A little lock free algorithm [the
> code]
>
>
> But what about it (backoff) getting cached?
>
> On 3/30/06, David Holmes <[hidden email]> wrote:
> > Okay this is basically a tryLock style of approach: if you can't get the
> > lock you know someone else is doing the increment. You've built your own
> > tryLock out of an AtomicBoolean.
> >
> > If backoff is only read and written when you've set the atomic
> boolean then
> > you can drop the volatile for it as it "piggybacks" on the AtomicBoolean
> > memory semantics.
> >
> > Cheers,
> > David Holmes
> >
> > > -----Original Message-----
> > > From: [hidden email]
> > > [mailto:[hidden email]]On Behalf Of
> > > studdugie
> > > Sent: Friday, 31 March 2006 12:13 PM
> > > To: [hidden email]
> > > Subject: [concurrency-interest] A little lock free algorithm
> [the code]
> > >
> > >
> > > public class TFailure
> > > {
> > >     volatile boolean dead;
> > >     volatile long expiration =
> > >         System.currentTimeMillis() + BACKOFF_INCREMENT;
> > >     private static final long BACKOFF_MAX = ...;
> > >     private static final long BACKOFF_INCREMENT = ...;
> > >     private volatile long backoff = BACKOFF_INCREMENT;
> > >     private final AtomicBoolean locked = new AtomicBoolean();
> > >
> > >     /**
> > >      * Increases the expiration timeout for the host if and only if
> > >      * it's not already dead and the current expiration time has
> > >       * elapsed.
> > >      */
> > >     void increment()
> > >     {
> > >         long millis;
> > >         if( dead || (millis = System.currentTimeMillis()) <
> expiration )
> > >             return;
> > >
> > >         if( locked.compareAndSet( false, true ) )
> > >         {
> > >             backoff += BACKOFF_INCREMENT;
> > >             if( backoff > BACKOFF_MAX )
> > >                 dead = true;
> > >             else
> > >                 expiration = millis + backoff;
> > >             locked.set( false );
> > >         }
> > >     }
> > > }
> > >

Re: A little lock free algorithm [the code]

studdugie
Now it could be that I'm just slow in the head so bear with me because
I'm still stuck on the caching issue. Here is what I'm stuck on.

Let's continue with your example and restrict ourselves to the
assumption that there are only 2 threads that will ever set locked to
true, A and B.  So what happens on the third go round after A and B
have updated backoff and A is back at it again? My understanding of
the visibility rules say that if backoff is not volatile then A can
cache the value it computed in the first go round and reuse it as the
basis for computation in the third. In other words, A never has to
re-read backoff, thus it may never see B's update.

Now if I'm wrong about this it means that my understanding of the
visibility rules is flawed and the rules have changed. So let me ask
you this. Are you saying the mixing of volatile and non-volatile
fields causes the non-volatile fields to behave like volatiles (no
caching)?

Sincerely,

Dane

On 3/30/06, David Holmes <[hidden email]> wrote:

> The use of backoff only occurs after a successful setting of the
> AtomicBoolean. The memory semantics for Atomics are like volatiles. This is
> what I meant by "piggybacking".
>
> If thread A sets locked to true, updates backoff and sets locked to false
> then:
>
>   locked=true happens-before backoff=x
>   backoff=x happens-before locked=false
>
> If thread B successfully sets locked to true then:
>
>   ThreadA:locked=false happens-before ThreadB:locked=true
>
> and by transitivity the setting of backoff in Thread A happens-before its
> use in Thread B.
>
> Hence no need for backoff to be volatile *iff* only read/written while
> locked==true.
>
> Consider if you had used a synchronized block instead (or an explicit lock):
> you would not make backoff volatile if it is only accessed while the lock is
> held.
>
> Cheers,
> David Holmes
> > -----Original Message-----
> > From: studdugie [mailto:[hidden email]]
> > Sent: Friday, 31 March 2006 12:31 PM
> > To: [hidden email]
> > Cc: [hidden email]
> > Subject: Re: [concurrency-interest] A little lock free algorithm [the
> > code]
> >
> >
> > But what about it (backoff) getting cached?
> >
> > On 3/30/06, David Holmes <[hidden email]> wrote:
> > > Okay this is basically a tryLock style of approach: if you can't get the
> > > lock you know someone else is doing the increment. You've built your own
> > > tryLock out of an AtomicBoolean.
> > >
> > > If backoff is only read and written when you've set the atomic
> > boolean then
> > > you can drop the volatile for it as it "piggybacks" on the AtomicBoolean
> > > memory semantics.
> > >
> > > Cheers,
> > > David Holmes
> > >
> > > > -----Original Message-----
> > > > From: [hidden email]
> > > > [mailto:[hidden email]]On Behalf Of
> > > > studdugie
> > > > Sent: Friday, 31 March 2006 12:13 PM
> > > > To: [hidden email]
> > > > Subject: [concurrency-interest] A little lock free algorithm
> > [the code]
> > > >
> > > >
> > > > public class TFailure
> > > > {
> > > >     volatile boolean dead;
> > > >     volatile long expiration =
> > > >         System.currentTimeMillis() + BACKOFF_INCREMENT;
> > > >     private static final long BACKOFF_MAX = ...;
> > > >     private static final long BACKOFF_INCREMENT = ...;
> > > >     private volatile long backoff = BACKOFF_INCREMENT;
> > > >     private final AtomicBoolean locked = new AtomicBoolean();
> > > >
> > > >     /**
> > > >      * Increases the expiration timeout for the host if and only if
> > > >      * it's not already dead and the current expiration time has
> > > >       * elapsed.
> > > >      */
> > > >     void increment()
> > > >     {
> > > >         long millis;
> > > >         if( dead || (millis = System.currentTimeMillis()) <
> > expiration )
> > > >             return;
> > > >
> > > >         if( locked.compareAndSet( false, true ) )
> > > >         {
> > > >             backoff += BACKOFF_INCREMENT;
> > > >             if( backoff > BACKOFF_MAX )
> > > >                 dead = true;
> > > >             else
> > > >                 expiration = millis + backoff;
> > > >             locked.set( false );
> > > >         }
> > > >     }
> > > > }
> > > >

RE: A little lock free algorithm [the code]

David Holmes-3
> Now if I'm wrong about this it means that my understanding of the
> visibility rules is flawed and the rules have changed. So let me ask
> you this. Are you saying the mixing of volatile and non-volatile
> fields causes the non-volatile fields to behave like volatiles (no
> caching)?

To a certain extent yes. volatile reads and writes, together with use of
Atomics, plus locks, all establish "happens-before" relationships. Within a
single thread, each action that comes before another action in program order
happens-before that second action. If a write of a variable happens-before a
subsequent read of that variable then the read is guaranteed to see the
value that was written. When considering actions across threads then there
are basically no guarantees about what values will be read by thread A after
a write by thread B unless the read and write are ordered by a
happens-before relationship.

The happens-before relationship is transitive: if a happens-before b, and b
happens-before c, then a happens-before c.

So in your example a set of locked=false in one thread happens-before every
subsequent CAS that manages to set locked to true in another thread.
Consequently any write to backoff that happens-before locked=false also
happens-before the other thread setting locked=true. As the write of backoff
and the read of backoff are ordered by a happens-before relationship, the
read is guaranteed to see the value that was previously written by the
other thread.

In the old memory model volatiles only had memory synchronization effects
with respect to other volatiles - which meant for most intents and purposes
if a volatile flag protected some data then the data also had to be
volatile. The new memory model defines the happens-before relationship which
is more general, and so non-volatile variables can acquire visibility
guarantees by "piggybacking" on the memory synchronization effects of
volatiles and Atomics. This is the same "piggybacking" that has always been
true for use of synchronized: you didn't have to make the data protected by
a monitor volatile, as use of the monitor ensures the visibility of the data.

Hope that clarifies things.

Cheers,
David Holmes


Re: A little lock free algorithm [the code]

studdugie
Crystal clear, thanx to you. I love that psychological "click" that
happens when something you've been stuck on finally makes sense.

Thanx a million.

Sincerely,

Dane
On 3/31/06, David Holmes <[hidden email]> wrote:

> > Now if I'm wrong about this it means that my understanding of the
> > visibility rules is flawed and the rules have changed. So let me ask
> > you this. Are you saying the mixing of volatile and non-volatile
> > fields causes the non-volatile fields to behave like volatiles (no
> > caching)?
>
> To a certain extent yes. volatile reads and writes, together with use of
> Atomics, plus locks, all establish "happens-before" relationships. Within a
> single thread each action that comes before another action in program-order,
> happens-before that second action. If a write of a variable happens-before a
> subsequent read of that variable then the read is guaranteed to see the
> value that was written. When considering actions across threads then there
> are basically no guarantees about what values will be read by thread A after
> a write by thread B unless the read and write are ordered by a
> happens-before relationship.
>
> The happens-before relationship is transitive: if a happens-before b, and b
> happens-before c, then a happens-before c.
>
> So in your example a set of locked=false in one thread happens-before every
> subsequent CAS that manages to set locked to true in another thread.
> Consequently any write to backoff that happens-before locked=false also
> happens-before the other thread setting locked=true. As the write of backoff
> and the read of backoff are ordered by a happens-before relationship, the
> read is guaranteed to see the value that was previously written by the
> other thread.
>
> In the old memory model volatiles only had memory synchronization effects
> with respect to other volatiles - which meant for most intents and purposes
> if a volatile flag protected some data then the data also had to be
> volatile. The new memory model defines the happens-before relationship which
> is more general, and so non-volatile variables can acquire visibility
> guarantees by "piggybacking" on the memory synchronization effects of
> volatiles and Atomics. This is the same "piggybacking" that has always been
> true for use of synchronized - you didn't have to make the data protected by
> a monitor, volatile, as use of the monitor ensures the visibility of the
> data.
>
> Hope that clarifies things.
>
> Cheers,
> David Holmes
>
>


Re: A little lock free algorithm [the code]

Dawid Kurzyniec
In reply to this post by studdugie
I think that unless you put the test "now < expiration" inside the
"lock", there is a potential race condition:

1. Thread A determines that expiration passed, succeeds at compareAndSet.
2. Thread B determines that expiration passed
3. Thread A updates backoff and expiration, unlocks, and returns
4. Thread B succeeds at compareAndSet, and thus also updates backoff and
expiration
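That interleaving can be replayed deterministically in a single thread (the values below are illustrative; in real code the window needs unlucky scheduling to hit):

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Single-threaded replay of the four steps above, one thread playing both
// roles, showing that both "threads" get past the check and both update.
class RaceReplay
{
    static boolean bothThreadsUpdate()
    {
        AtomicBoolean locked = new AtomicBoolean();
        long expiration = 0;
        long now = 100;
        // 1. Thread A sees the expiration passed and wins the CAS
        boolean aLocked = now >= expiration && locked.compareAndSet( false, true );
        // 2. Thread B also sees the expiration passed (before A's update)
        boolean bSawExpired = now >= expiration;
        // 3. Thread A updates expiration, unlocks, and returns
        expiration = now + 1000;
        locked.set( false );
        // 4. Thread B, relying on its stale check, also wins the CAS and
        //    would run the update block a second time in the same window
        boolean bLocked = bSawExpired && locked.compareAndSet( false, true );
        return aLocked && bLocked;
    }
}
```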

Regards,
Dawid


Re: A little lock free algorithm [the code]

studdugie
You are quite right about the race. It's been nagging at me since I
first wrote it. It's the reason I wanted to post the code here in the
first place: after testing the code and not seeing the race happen I
wasn't so sure anymore. So I wanted to see if anyone else thought it
was a problem besides me, and whatever else they might find.

As I write this, it just dawned on me why the race didn't show up in
testing. I'm running Gentoo Linux w/ a custom kernel w/ preemption
turned off. In the scenario you've described the race requires some
mechanism to stall Thread B long enough for A to acquire and release
"locked", and in my test environment there simply isn't enough going on
in other parts of the system to stall Thread B's execution; w/o
preemption that's even less likely.

Thanks for confirming my suspicions.

Cheers,

Dane

On 3/31/06, Dawid Kurzyniec <[hidden email]> wrote:

> I think that unless you put the test "now < expiration" inside the
> "lock", there is a potential race condition:
>
> 1. Thread A determines that expiration passed, succeeds at compareAndSet.
> 2. Thread B determines that expiration passed
> 3. Thread A updates backoff and expiration, unlocks, and returns
> 4. Thread B succeeds at compareAndSet, and thus also updates backoff and
> expiration
>
> Regards,
> Dawid
>
>


RE: A little lock free algorithm [the code]

David Holmes-3
> studdugie writes:
>
> You are quite right about the race. It's been nagging at me since I
> first wrote it. It's the reason why I wanted to post the code here in
> the first place because after testing the code and not seeing the race
> happen I wasn't so sure anymore.So I wanted to see if anyone else
> thought it was a problem besides me and whatever else they might find.

Sorry about that. I was focussing on the visibility issues and overlooked
the details of the semantic requirements.

If you want the execution of the if block to only be performed by one thread
per "time window" then the test of the time has to be atomic with respect to
updating expiration. Simplest way is to re-do the test when you have the
"lock":

    void increment()
    {
        long millis;
        // quick check if nothing to do
        if( dead || (millis = System.currentTimeMillis()) < expiration )
            return;

        if( locked.compareAndSet( false, true ) )
        {
            try
            {
                // check again as things might have changed since the first check
                if( dead || millis < expiration )
                    return;

                backoff += BACKOFF_INCREMENT;
                if( backoff > BACKOFF_MAX )
                    dead = true;
                else
                    expiration = millis + backoff;
            }
            finally
            {
                // release even on the early return above, or the "lock" leaks
                locked.set( false );
            }
        }
    }

This is still approximate as you really need to snapshot the current time as
close as possible to the detection of the failure - otherwise you might get
preempted and get a distorted view of when the failure actually happened. Of
course there is never a guarantee as you can be preempted at any time.

Looking closer I'm not clear on exactly how you want this to work. When and
how are TFailure objects created? I'm wondering how long might elapse
between the initialization of "expiration" and the first call to increment.
I suspect your logic is something like:

if connect_to_host_failed then
   if host.tfailure == null
       host.tfailure = new Tfailure()
   else
      host.tfailure.increment()

in which case how do you make the construction/publication thread-safe?

Also your expiration logic would seem to lead to a host being declared dead
after a certain number of failures, regardless of successes in between the
failures. Eventually backoff will be greater than BACKOFF_MAX because it is
never reduced. I also don't understand why you want the failure window to
get larger and larger.

Cheers,
David Holmes


Re: A little lock free algorithm [the code]

studdugie
On 3/31/06, David Holmes <[hidden email]> wrote:

> Sorry about that. I was focussing on the visibility issues and overlooked
> the details of the semantic requirements.
>
> If you want the execution of the if block to only be performed by one thread
> per "time window" then the test of the time has to be atomic with respect to
> updating expiration. Simplest way is to re-do the test when you have the
> "lock":
>
>     void increment()
>     {
>         long millis;
>         // quick check if nothing to do
>         if( dead || (millis = System.currentTimeMillis()) < expiration )
>             return;
>
>         if( locked.compareAndSet( false, true ) )
>         {
>             try
>             {
>                 // check again as things might have changed since the
>                 // first check
>                 if( dead || millis < expiration )
>                     return;
>
>                 backoff += BACKOFF_INCREMENT;
>                 if( backoff > BACKOFF_MAX )
>                     dead = true;
>                 else
>                     expiration = millis + backoff;
>             }
>             finally
>             {
>                 // release even on the early return above, or the "lock" leaks
>                 locked.set( false );
>             }
>         }
>     }
>
> This is still approximate as you really need to snapshot the current time as
> close as possible to the detection of the failure - otherwise you might get
> preempted and get a distorted view of when the failure actually happened. Of
> course there is never a guarantee as you can be preempted at any time.

Truth be told, I don't need the timestamp to be absolutely right; I
just need it to be close enough. Given that we are talking about
backoffs of minutes, hours, and even days, a thread would have to be
preempted for a very long time for the timestamp to be so skewed as to
be useless.

> Looking closer I'm not clear on exactly how you want this to work. When and
> how are TFailure objects created? I'm wondering how long might elapse
> between the initialization of "expiration" and the first call to increment.
> I suspect your logic is something like:
>
> if connect_to_host_failed then
>    if host.tfailure == null
>        host.tfailure = new Tfailure()
>    else
>       host.tfailure.increment()
>
> in which case how do you make the construction/publication thread-safe?

I use ConcurrentMap's putIfAbsent(K,V) method to make publication
thread-safe. So we end up w/:
TFailure host = failures.get( ip );
if( null == host )
    failures.putIfAbsent( ip, new TFailure() );
else
    host.increment();
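Fleshed out as a self-contained sketch (the Failure class below is a simplified stand-in for TFailure that just counts failures, and this variant also increments when putIfAbsent loses the race, so no report is dropped):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicInteger;

// putIfAbsent publication idiom. putIfAbsent returns the previous mapping
// (or null if we won the race), so a losing thread can still increment the
// winner's entry instead of silently dropping its failure report.
class FailureTracker
{
    static class Failure
    {
        final AtomicInteger count = new AtomicInteger( 1 ); // created on first failure
        void increment() { count.incrementAndGet(); }
    }

    private final ConcurrentMap<String, Failure> failures = new ConcurrentHashMap<>();

    void recordFailure( String ip )
    {
        Failure host = failures.get( ip );
        if( null == host )
        {
            host = failures.putIfAbsent( ip, new Failure() );
            if( null == host )
                return; // we won: the new entry already counts this failure
        }
        host.increment(); // entry already published by some thread
    }

    void recordSuccess( String ip )
    {
        failures.remove( ip ); // a good connection gives the host a new lease on life
    }

    int failureCount( String ip )
    {
        Failure f = failures.get( ip );
        return f == null ? 0 : f.count.get();
    }
}
```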

> Also your expiration logic would seem to lead to a host being declared dead
> after a certain number of failures, regardless of successes in between the
> failures. Eventually backoff will be greater than BACKOFF_MAX because it is
> never reduced. I also don't understand why you want the failure window to
> get larger and larger.

Not quite. For every successful connection made there is a call to
"failures" to remove any entry for the successful "ip". So basically
whenever a connection gets made the host gets a new lease on life.
What can I say? I'm not a vengeful god. I'm all about redemption.

Cheers,

Dane
