Safe publishing strategy


Safe publishing strategy

Luke Sandberg
Guava has this method:

Futures.withFallback

which is implemented via a delegating future:

private static class FallbackFuture<V> extends AbstractFuture.TrustedFuture<V> {
    ListenableFuture<? extends V> input;

    FallbackFuture(ListenableFuture<? extends V> input,
        final FutureFallback<? extends V> fallback,
        final Executor executor) {
      this.input = input;
      // ... a bunch of stuff with listeners ...
    }

    @Override
    public boolean cancel(boolean mayInterruptIfRunning) {
      ListenableFuture<?> local = this.input;
      if (super.cancel(mayInterruptIfRunning)) {
        if (local != null) {
          local.cancel(mayInterruptIfRunning);
        }
        return true;
      }
      return false;
    }
}

This future does a lot of work to recover from failure of the input future.  But as a general rule in Guava, all of the Futures.java utilities try to propagate cancellation.  The question is:

How do we ensure that the initial write to 'input' is visible to cancel()?  

Because input is non-final, there is no guarantee that the write will be visible if someone unsafely tunnels the FallbackFuture to another thread and calls cancel(). Or are we analyzing the situation incorrectly?

This pattern is common throughout our Guava ListenableFuture utilities, and as far as we can tell it is a latent bug, since we aren't 'safely publishing' our delegating future wrappers.
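A minimal sketch of the hazard being described (illustrative names, not Guava's actual code):

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Illustrative sketch of the hazard: `input` is non-final and only written
// in the constructor, mirroring FallbackFuture.input. All names here are
// hypothetical.
class CancellingWrapper {
    private Object input;  // non-final: no freeze action at end of constructor
    private final AtomicBoolean cancelled = new AtomicBoolean();

    CancellingWrapper(Object input) {
        this.input = input;  // under a data race, this write may not be visible
    }

    boolean cancel() {
        Object local = this.input;  // a racing thread may observe null here
        if (cancelled.compareAndSet(false, true)) {
            return local != null;   // propagation is silently skipped if null was seen
        }
        return false;
    }
}

// The "unsafe tunnel": handing the wrapper to another thread through a plain
// field creates no happens-before edge, so the JMM permits the reading thread
// to see the object reference before the constructor's write to `input`.
class RacyHandoff {
    static CancellingWrapper shared;  // plain, non-volatile field
}
```

As discussed later in the thread, this race is essentially impossible to observe on x86 in practice, but the JMM permits it.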

Thanks

_______________________________________________
Concurrency-interest mailing list
[hidden email]
http://cs.oswego.edu/mailman/listinfo/concurrency-interest
Reply | Threaded
Open this post in threaded view
|

Re: Safe publishing strategy

Martin Buchholz
It does look to me like FallbackFuture.running can be accessed via a data race.  The code seems buggy. 

Generally, Future implementations have volatile fields and careful state transitions via CAS, as with FutureTask.

One is tempted to immediately make running volatile.

As for the initial write to running, it will probably (almost?) always be seen in practice, especially on x86, since the future itself is likely to be safely published somehow and there will likely be ordering between the end of the constructor and the write of the reference.  These kinds of races are very difficult to demonstrate in practice.

Re: Safe publishing strategy

Vitaly Davidovich

So I think we've covered this before on this list, but until the JMM is revised, a volatile wouldn't technically prevent that type of reordering here.  However, it appears that most (all?) JVMs treat volatile fields like final fields in constructors.

Moreover, adding volatile in this case could hurt performance since, unlike the AtomicXXX case discussed on the other thread, I'm assuming these wrapper futures are constructed a lot.  It's also a waste if nobody actually publishes unsafely.

sent from my phone

Re: Safe publishing strategy

Luke Sandberg
Exactly.  We discussed adding volatile, but then realized it wouldn't fix it.  We are pretty sure that in practice it doesn't matter, since it essentially requires a 'malicious' caller.

The only 'state transition' we require in this case is 'fully constructed', which doesn't seem like a complicated thing to achieve, but all the ways I can think of doing it add complexity and/or overhead.  E.g., we could add a method like this:

static final class Wrapper<T> {
  volatile T t;
}

static <T> T safelyPublish(T t) {
  Wrapper<T> wrapper = new Wrapper<T>();
  wrapper.t = t;
  return wrapper.t;
}

Now, I think, passing any 'unsafely constructed' object through this helper would create sufficient happens-before edges that this code would be JMM compliant.

Other ideas we discussed:

1. Instead of 'nulling out' the running field, we could assign a special tombstone value, and then cancel() could spin until it reads a non-null value.  This would require running to be volatile.
2. We could wrap the reads/writes of running in a synchronized block.
3. We could use some kind of synchronizer at the end of the constructor (e.g. count down a CountDownLatch at the end of the constructor and then call await() at the beginning of cancel()).

All of these seem pretty complex/high overhead to solve this (afaik only theoretical) problem.  Does anyone have other ideas?  How bad would it be to just rely on our callers not to unsafely publish?  Is there any precedent for documenting something like this?
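For concreteness, idea 3 might look something like the following sketch (hypothetical names; not proposed Guava code):

```java
import java.util.concurrent.CountDownLatch;

// Sketch of idea 3: counting down a latch as the last action of the
// constructor creates a happens-before edge (countDown() happens-before a
// successful await()), so any thread that awaits in cancel() is guaranteed
// to see the constructor's write to `input`, even if the wrapper itself was
// published via a data race. The final `constructed` field is itself covered
// by final-field freeze semantics. All names here are illustrative.
class LatchedWrapper {
    private final CountDownLatch constructed = new CountDownLatch(1);
    private Object input;  // still non-final

    LatchedWrapper(Object input) {
        this.input = input;
        // ... listener registration etc. would go here ...
        constructed.countDown();  // must be the constructor's last action
    }

    boolean cancel() throws InterruptedException {
        constructed.await();   // synchronizes-with the countDown() above
        return input != null;  // now guaranteed to see the constructor's write
    }
}
```

The obvious costs are an extra allocation per wrapper and a potentially blocking call in cancel(), which is why the list above treats all of these options as high overhead.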

Re: Safe publishing strategy

Vitaly Davidovich
What about making the field final but instead of storing the future directly, as now, store it in a wrapper class, so e.g.:

static final class Wrapper<T> {
    ListenableFuture<T> input;
    public Wrapper(ListenableFuture<T> input) { this.input = input; }
}

private static class FallbackFuture<V> extends AbstractFuture.TrustedFuture<V> {
    final Wrapper<? extends V> wrapper;

    FallbackFuture(ListenableFuture<? extends V> input,
        final FutureFallback<? extends V> fallback,
        final Executor executor) {
      wrapper = new Wrapper<>(input);
      // ... a bunch of stuff with listeners ...
    }

    @Override
    public boolean cancel(boolean mayInterruptIfRunning) {
      ListenableFuture<?> local = this.wrapper.input;
      if (super.cancel(mayInterruptIfRunning)) {
        if (local != null) {
          local.cancel(mayInterruptIfRunning);
        }
        return true;
      }
      return false;
    }
}

This is somewhat similar to your Wrapper, except there's no volatile and no safelyPublish().

As for "policy" on racy publishing, I think the common convention is that you only document it if you *allow* racy publication -- the default assumption by users should be that it's not safe.
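The guarantee this approach relies on is the final-field freeze of JLS §17.5: a reader that obtains the FallbackFuture reference, even through a data race, sees the frozen value of the final `wrapper` field, and also sees objects reachable through that final field at least as up-to-date as they were at the freeze, which covers the non-final inner write. A condensed, runnable sketch (illustrative names):

```java
// Condensed sketch of the final-field wrapper idea above (hypothetical names).
// `holder` is final, so its value is frozen at the end of the constructor;
// JLS 17.5 extends the guarantee to objects reachable through a final field,
// so the non-final Holder.input write is visible too.
final class Holder {
    Object input;  // non-final, like the Wrapper's field above
    Holder(Object input) { this.input = input; }
}

final class SafePublishWrapper {
    private final Holder holder;  // final: freeze action at end of constructor

    SafePublishWrapper(Object input) {
        this.holder = new Holder(input);
    }

    boolean cancelSeesInput() {
        return holder.input != null;  // guaranteed even under racy publication
    }
}
```

The cost is one small allocation per wrapper, which the next message objects to.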

Re: Safe publishing strategy

Luke Sandberg
Thanks, the wrapper was another strategy discussed (and is likely the best due to simplicity). 

Still, it seems lame to have to _allocate_ to fix a visibility issue.  It seems like there should be something similar to the 'freeze action' for non-final fields: http://docs.oracle.com/javase/specs/jls/se7/html/jls-17.html#jls-17.5.1

Also, (just for my own edification) is it actually possible to observe this bug on x86?

Finally, it looks like CompletableFuture avoids these issues by not propagating cancellation and by not nulling out input futures (it looks like everything is in a final field).  If we dropped either of those requirements, this issue would go away; unfortunately, those are both important features for our users.

Re: Safe publishing strategy

Vitaly Davidovich
Yes, it's definitely lame to allocate here.  Personally, I'd punt on this and rely on users publishing safely.  After all, if your users are unsafely publishing any random class, their app won't work anyway.

x86 does not reorder stores in the instruction stream (only store-load combinations can appear out of order, due to store buffers), but the compiler could, theoretically, reorder the code such that it first writes the freshly-allocated FallbackFuture to memory and then runs its constructor (which is basically what final field semantics are supposed to prevent).



On Fri, Jan 23, 2015 at 11:17 AM, Luke Sandberg <[hidden email]> wrote:
Thanks, the wrapper was another strategy discussed (and is likely the best due to simplicity). 

Still it seem lame to have to _allocate_ to fix a visibility issue.  It seems like there should be something similar to the 'freeze action' for non-final fields: http://docs.oracle.com/javase/specs/jls/se7/html/jls-17.html#jls-17.5.1

Also, (just for my own edification) is it actually possible to observe this bug on x86?

Finally, It looks like CompletableFuture avoids these issues by not propagating cancellation and not nulling out input futures (it looks like everything is in a final field).  If we dropped either of those requirements, this issue would go away, unfortunately those are both important features for our users.

On Fri, Jan 23, 2015 at 7:21 AM, Vitaly Davidovich <[hidden email]> wrote:
What about making the field final but instead of storing the future directly, as now, store it in a wrapper class, so e.g.:

static final class Wrapper<T> {
    ListenableFuture<T> input;
    public Wrapper(ListenableFuture<T> input) { this.input = input; }
}

private static class FallbackFuture<V> extends AbstractFuture.TrustedFuture<V> {
    final Wrapper<? extends V> wrapper;

    FallbackFuture(ListenableFuture<? extends V> input,
        final FutureFallback<? extends V> fallback,
        final Executor executor) {
      wrapper = new Wrapper(input);
      /// a bunch of stuff with listeners
    }

  @Override
    public boolean cancel(boolean mayInterruptIfRunning) {
      ListenableFuture<?> local = this.wrapper.input;
      if (super.cancel(mayInterruptIfRunning)) {
        if (local != null) {
          local.cancel(mayInterruptIfRunning);
        }
        return true;
      }
      return false;
   }
}

This is somewhat similar to your Wrapper except there's no volatile and safePublish().

As for "policy" on racy publishing, I think common convention is you only document if you *allow* racy publication -- default assumption by users should be that it's not safe.

On Fri, Jan 23, 2015 at 10:10 AM, Luke Sandberg <[hidden email]> wrote:
exactly.  We discussed adding volatile, but then realized it wouldn't fix it.  We are pretty sure that in practice it doesn't matter (since it essentially requires a 'malicious' caller)

The only 'state transition' we require in this case is 'fully constructed' which doesn't seem like a complicated thing to achieve but all the ways i can think of doing it add complexity and/or overhead.  e.g. we could add a method like this:

static final class Wrapper<T> {
  volatile T t;
}
static <T> T safelyPublish(T t) {
 Wrapper wrapper = new Wrapper<T> ();
 wrapper.t = t;
 return wrapper.t;
}

Now, i think, that passing any 'unsafely constructed' object through this helper would create sufficient happens-before edges that this code would be JMM compliant. 

Other ideas we discussed:

1. instead of 'nulling out' the running field, we could assign a special tombstone value and then cancel() could spin until it reads a non-null value.  This would require running to be volatile
2. we could wrap the read/write of running in a synchronized block
3. we could use some kind of synchronizer at the end of the constructor (e.g. countdown a countdownlatch at the end of the constructor and then call await() at the beginning of cancel())

All of these seem pretty complex/high overhead to solve this (afaik only theoretical) problem.  Does anyone have other ideas?  How bad would it be to just rely on our callers not to unsafely publish?  Is there any precedent for documenting something like this?

On Thu, Jan 22, 2015 at 8:32 PM, Vitaly Davidovich <[hidden email]> wrote:

So I think we've covered this before on this list, but until JMM is revised, a volatile wouldn't technically prevent that type of reordering here.  However, it appears that most (all?) JVMs treat volatile like final in constructor.

However, adding volatile in this case could hurt performance as, unlike the AtomicXXX case discussed on the other thread, I'm assuming these wrapper futures are constructed a lot.  It's also a waste if nobody actually publishes unsafely.

sent from my phone

On Jan 22, 2015 11:24 PM, "Martin Buchholz" <[hidden email]> wrote:
It does look to me like FallbackFuture.running can be accessed via a data race.  The code seems buggy. 

Generally, Future implementations have volatile fields and careful state transitions via CAS, as with FutureTask.

One is tempted to immediately make running volatile.

As for the initial write to running, it will probably (almost?) always be seen in practice, especially on x86, since the future itself is likely to be safely published somehow and there will likely be ordering between the end of the constructor and the write of the reference.  These kinds of races are very difficult to demonstrate in practice.

_______________________________________________
Concurrency-interest mailing list
[hidden email]
http://cs.oswego.edu/mailman/listinfo/concurrency-interest

Re: Safe publishing strategy

Aleksey Shipilev-2
In reply to this post by Luke Sandberg
On 23.01.2015 18:10, Luke Sandberg wrote:

> The only 'state transition' we require in this case is 'fully
> constructed' which doesn't seem like a complicated thing to achieve but
> all the ways i can think of doing it add complexity and/or overhead.
>  e.g. we could add a method like this:
>
> static final class Wrapper<T> {
>   volatile T t;
> }
> static <T> T safelyPublish(T t) {
>  Wrapper<T> wrapper = new Wrapper<T>();
>  wrapper.t = t;
>  return wrapper.t;
> }
>
> Now, i think, that passing any 'unsafely constructed' object through
> this helper would create sufficient happens-before edges that this code
> would be JMM compliant.
Erm, JMM compliant how? If you want a happens-before edge between the
write in one thread, and the read in another thread, you have to have
inter-thread synchronizes-with edge. That is, in this case, you have to
write to volatile in publisher thread, and read from volatile in
consumer thread.

Swizzling the value through this magic safelyPublish method does not
introduce any inter-thread edges. If you still have to leak either the
argument or returned t to another thread through the data race, all bets
are off. If you *received* the t from another thread via the data race,
it is again too late to "sanitize" it with safelyPublish.

In other words, once a racy write had happened, there is NO WAY TO
RECOVER. That ship had sailed. JMM-wise, once you have an unordered
write, any read can see it in any happens-before consistent execution
(modulo causality requirements). There are also happens-before
consistent executions where you don't see that racy write. It's in
limbo, it may or may not come.

Spec-wise, the only escape-hatch way to be resilient in the face of
unsafe publication is final. No final -- no guarantees.
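The final-field escape hatch just described can be sketched as follows (class names are illustrative; note that Guava cannot simply adopt this here, because, as discussed elsewhere in the thread, its wrappers null out the input field after completion):

```java
import java.util.concurrent.Future;
import java.util.concurrent.FutureTask;

// Sketch of the escape hatch: a wrapper whose only state is a final field is
// resilient even if its reference leaks through a data race, because JLS 17.5
// guarantees racy readers see final fields fully initialized.
final class FinalWrapper {
    private final Future<?> input;  // final: frozen at end of constructor

    FinalWrapper(Future<?> input) {
        this.input = input;
    }

    boolean cancel(boolean mayInterruptIfRunning) {
        return input.cancel(mayInterruptIfRunning);  // never observes null
    }
}

public class FinalFieldDemo {
    public static void main(String[] args) {
        FutureTask<Void> task = new FutureTask<>(() -> null);
        FinalWrapper w = new FinalWrapper(task);
        System.out.println(w.cancel(true));
        System.out.println(task.isCancelled());
    }
}
```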


> All of these seem pretty complex/high overhead to solve this (afaik only
> theoretical) problem.  Does anyone have other ideas?  How bad would it
> be to just rely on our callers not to unsafely publish?  Is there any
> precedent for documenting something like this?

This is not a theoretical problem. Well, I think the consensus is
educating users that publishing via data race is very wrong, and should
be avoided at all costs. Only a carefully constructed class can survive
unsafe publication, but one should not generally rely on this because
classes are constructed by humans, and humans do mistakes all the time.
Defense in depth here: protect your classes with finals to recover from
accidents, but don't suggest users to unsafely publish them because of that.

Thanks,
-Aleksey.



Re: Safe publishing strategy

Luke Sandberg
"A write to a volatile field (§8.3.1.4) happens-before every subsequent read of that field."

So by pulling the value out of the volatile field we get an HB edge, and program order supplies the rest. The JMM doesn't require the 'subsequent read' to be on another thread; it still works if it is. I think in practice this means that no writes will get reordered past this volatile write.

So that write/read should force all subsequent reads to see all the writes that happened earlier (notably our constructor field assignments). 


Re: Safe publishing strategy

Aleksey Shipilev-2
On 23.01.2015 20:50, Luke Sandberg wrote:
> "A write to a volatile field (§8.3.1.4) happens-before every subsequent
> read of that field."

...

> The JMM doesn't require the
> 'subsequent read' to be on another thread, it just still works if it
> it.  I think in practice this means that no writes will get reordered
> past this volatile write.

But you have to *read* in another thread, if we are trying to reason
about *publication*. There is no sense in discussing publication when
producer and consumer are the same thread.

> So that write/read should force all subsequent reads to see all the
> writes that happened earlier (notably our constructor field assignments).

I agree with that part. My challenge is to employ this for inter-thread
communication. In other words, can you take your code and construct the
example with 2 threads?

>     > static final class Wrapper<T> {
>     >   volatile T t;
>     > }
>     > static <T> T safelyPublish(T t) {
>     >  Wrapper<T> wrapper = new Wrapper<T>();
>     >  wrapper.t = t;
>     >  return wrapper.t;
>     > }

Thanks,
-Aleksey.



Re: Safe publishing strategy

Luke Sandberg
Where does the JMM say that the read has to happen on another thread to get the HB edge?


Re: Safe publishing strategy

Doug Lea
In reply to this post by Luke Sandberg
On 01/23/2015 11:17 AM, Luke Sandberg wrote:

> Finally, It looks like CompletableFuture avoids these issues by not propagating
> cancellation and not nulling out input futures (it looks like everything is in a
> final field).  If we dropped either of those requirements, this issue would go
> away, unfortunately those are both important features for our users.

CompletableFuture automatically propagates cancellation (and other
exceptions) forward (to dependents), not backwards to sources.
But it is possible to do so using constructions along the lines of:

   void propagateCancel(CompletableFuture<?> f, CompletableFuture<?> source) {
     f.whenComplete((Object r, Throwable ex) -> {
       if (f.isCancelled()) source.cancel(true); });
   }

(CompletableFuture and other composable futures/promises are
internally very racy, which simplifies things for users but
challenging for implementors.)
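A runnable sketch of this backward-propagation construction (the demo class and names around it are illustrative): cancelling the dependent future cancels the source through the whenComplete callback, which runs in the cancelling thread before cancel() returns.

```java
import java.util.concurrent.CompletableFuture;

public class BackwardCancelDemo {
    // When f completes (including completion by cancellation), the callback
    // checks for cancellation and propagates it backwards to the source.
    static void propagateCancel(CompletableFuture<?> f, CompletableFuture<?> source) {
        f.whenComplete((r, ex) -> {
            if (f.isCancelled()) source.cancel(true);
        });
    }

    public static void main(String[] args) {
        CompletableFuture<String> source = new CompletableFuture<>();
        CompletableFuture<String> dependent = source.thenApply(String::toUpperCase);
        propagateCancel(dependent, source);

        dependent.cancel(true);                    // cancel the dependent...
        System.out.println(source.isCancelled());  // ...and the source follows
    }
}
```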

-Doug



Re: Safe publishing strategy

Aleksey Shipilev-2
In reply to this post by Luke Sandberg
On 23.01.2015 21:05, Luke Sandberg wrote:
> Where does the JMM say that the read has to happen on another thread to
> get the HB edge?

I'm sorry, but this is the wrong question in this context. I agree that
the preceding writes in the same thread happen-before the subsequent
reads in the same thread.

But once again, if we are talking about the publication, then we have to
consider the example when one thread writes (publishes) the object, and
another thread reads (consumes) the object. I invite you to show how
this code helps with safe publication:

>     >     > static final class Wrapper<T> {
>     >     >   volatile T t;
>     >     > }
>     >     > static <T> T safelyPublish(T t) {
>     >     >  Wrapper<T> wrapper = new Wrapper<T>();
>     >     >  wrapper.t = t;
>     >     >  return wrapper.t;
>     >     > }

That is, who calls safelyPublish -- producer or the consumer? How the
reference to t is communicated between threads? If you want to suggest
this method helps, I would like to see how would a happens-before be
constructed between the creation of T in one thread, and its consumption
in another.

Thanks,
-Aleksey.




Re: Safe publishing strategy

Luke Sandberg
A simplified example would be:

class ForwardingFuture implements Future {
  Future delegate;
  ForwardingFuture(Future delegate) {
    this.delegate = checkNotNull(delegate);
  }
  // ...delegate methods
  boolean cancel(boolean interrupt) {
    return delegate.cancel(interrupt);
  }
}

static Future unsafe;

T1: unsafe = new ForwardingFuture(someFuture);

T2:
Future local;
while ((local = unsafe) == null) {}  // spin
local.cancel(true);

So how do we make sure that T2 can never get an NPE?

If we changed T1 to 'unsafe = safelyPublish(new ForwardingFuture(someFuture));'

then that would fix it because it would insert an HB edge after all the writes to ForwardingFuture, as would similar tricks involving final fields. So the producer would be responsible (in this case I am the producer and I cannot control consumers).



Re: Safe publishing strategy

Vitaly Davidovich
In reply to this post by Aleksey Shipilev-2
He's just trying to emulate final field semantics using volatile store + load in same thread to prevent compiler reordering.  At any rate, as mentioned before, it looks like JMM will be revised to specify that volatile writes in a constructor are treated the same way as final fields, and current JVMs already do that anyway which means you wouldn't need the "fake" load of wrapper.t.


Re: Safe publishing strategy

Luke Sandberg
In reply to this post by Doug Lea
Would you suggest just switching all these fields to 'volatile' now, since while it isn't correct according to the JMM, it is with the current implementations and will be in the JMM in the future?


Re: Safe publishing strategy

Martin Buchholz-3
In reply to this post by Luke Sandberg
I think making delegate volatile would fix the problem in practice if not in theory.
To fix it in theory, do:

class ForwardingFuture implements Future {
  final AtomicReference<Future> delegate;
  ForwardingFuture(Future delegate) {
    this.delegate = new AtomicReference<Future>(checkNotNull(delegate));
  }
  // ...delegate methods
  boolean cancel(boolean interrupt) {
    return delegate.get().cancel(interrupt);
  }
}

If you look deeper at the implementation of AtomicReference itself, it simply has a volatile field set in the constructor, and there are zero complaints about AtomicReference.get() returning uninitialized null. So in practice, on all JVMs, simply making delegate a volatile Future is sufficient.
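The volatile variant just described can be sketched like this (class name is illustrative, other Future methods are elided; on current JVMs the constructor's volatile write is not reordered past the publishing store, though, as this thread discusses, the strict JMM guarantee is debated):

```java
import java.util.concurrent.Future;
import java.util.concurrent.FutureTask;

// Sketch of the in-practice fix: same forwarding wrapper, but with a
// volatile delegate field instead of a plain one.
class VolatileForwardingFuture<V> {
    private volatile Future<V> delegate;

    VolatileForwardingFuture(Future<V> delegate) {
        if (delegate == null) throw new NullPointerException();
        this.delegate = delegate;  // volatile write inside the constructor
    }

    boolean cancel(boolean mayInterruptIfRunning) {
        Future<V> local = delegate;  // volatile read
        return local != null && local.cancel(mayInterruptIfRunning);
    }
}

public class VolatileDelegateDemo {
    public static void main(String[] args) {
        FutureTask<Void> task = new FutureTask<>(() -> null);
        VolatileForwardingFuture<Void> f = new VolatileForwardingFuture<>(task);
        System.out.println(f.cancel(true));
    }
}
```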




Re: Safe publishing strategy

Aleksey Shipilev-2
In reply to this post by Luke Sandberg
On 23.01.2015 21:14, Luke Sandberg wrote:

> A simplified example would be
> class ForwardingFuture implements Future {
>   Future delegate;
>   ForwardingFuture(Future delegate) {
>     this.delegate = checkNotNull(delegate);
>   }
> ...// delegate methods
>   boolean cancel(boolean interrupt) {
>     return delegate.cancel(interrupt);
>   }
>
>
>
> static Future unsafe;
>
> T1: unsafe = new ForwardingFuture(someFuture);
>
> T2:
> Future local;
> while((local =unsafe) == null) {}  // spin
> local.cancel(true);
>
>
> So how do we make sure that T2 can never get an NPE.  
>
> If we changed T1 to 'unsafe = safelyPublish(new
> ForwardingFuture(someFuture));'
>
> then that would fix it because it it would insert an HB edge between all
> the writes to ForwardingFuture.  
No, that wouldn't fix it. There is no happens-before between the read of
$unsafe in T2, and the write of $unsafe in T1, sorry. It is a race, T2
is not guaranteed to see the $unsafe contents properly, even if it
succeeds in busy-waiting for non-null $unsafe.

To reiterate: you need a HB between *the write* and *the read*.

-Aleksey.
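The edge being asked for can be sketched with an undisputed pattern: make the publishing field itself volatile, so the reader's volatile read pairs with the publisher's volatile write of the same field (names are illustrative):

```java
import java.util.concurrent.FutureTask;

// Safe publication via a volatile publishing field: T2's read of 'shared'
// synchronizes-with T1's write of 'shared', so T2 is guaranteed to see a
// fully constructed object.
public class SafePublishDemo {
    static volatile FutureTask<Void> shared;  // write and read of the SAME volatile field

    public static void main(String[] args) throws Exception {
        Thread reader = new Thread(() -> {
            FutureTask<Void> local;
            while ((local = shared) == null) { }  // spin until published
            local.cancel(true);                   // guaranteed fully constructed
        });
        reader.start();
        shared = new FutureTask<>(() -> null);    // safe publication: volatile write
        reader.join();
        System.out.println(shared.isCancelled());
    }
}
```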



Re: Safe publishing strategy

Martin Buchholz-3
In reply to this post by Martin Buchholz-3
A good summary of "happens-before" is the

Memory Consistency Properties

section of http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/package-summary.html


"""A write to a volatile field happens-before every subsequent read of that same field."""



Re: Safe publishing strategy

Luke Sandberg
> No, that wouldn't fix it. There is no happens-before between the read of
> $unsafe in T2, and the write of $unsafe in T1, sorry. It is a race, T2
> is not guaranteed to see the $unsafe contents properly, even if it
> succeeds in busy-waiting for non-null $unsafe.
> To reiterate: you need a HB between *the write* and *the read*.

I don't think so; I already have a read from a volatile. Anything that anyone does after that (safe or unsafe) with the reference is guaranteed to see a fully constructed instance. If they were able to see a partially constructed instance, then program order would be violated.
