Non-volatile reads


Non-volatile reads

Bobrowski, Maciej

Let’s consider the class:

 

class Foo {

   int x = 0;
   volatile int y = 0;

   void write(int newVal) {
      x = newVal;
      y = newVal;
   }

   int getX() { return x; }
}

 

I would like to assume that the compiler is NOT going to rewrite getX() to return a constant, and that it will actually read x from memory/cache rather than from a register. Let's assume one thread is periodically calling write with increasing values, and another thread is reading x.
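
For concreteness, a minimal two-thread harness for that scenario (illustrative only, not part of the original message; it assumes the Foo class above):

class FooDemo {
    public static void main(String[] args) throws InterruptedException {
        Foo foo = new Foo();

        Thread writer = new Thread(() -> {
            for (int i = 1; i <= 1_000_000; i++) {
                foo.write(i);               // plain write to x, then volatile write to y
            }
        });

        Thread reader = new Thread(() -> {
            int seen;
            do {
                seen = foo.getX();          // plain (non-volatile) read of x
            } while (seen == 0);            // the JMM alone does not promise this loop exits
            System.out.println("reader saw x = " + seen);
        });

        reader.start();
        writer.start();
        writer.join();
        reader.join();
    }
}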

 

Q1. Volatile forces ordering and visibility of the writes (to x and y) across processors. As far as I can see, when the volatile write happens, all store buffers of that core will be flushed in an exclusive manner (by the core obtaining exclusive ownership). That push will invalidate the other cores' cache lines related to the data written (not sure how, though). Is that correct?

 

Q2. Given the above, after the flushing of the buffers happens, the other thread will be forced to re-read x from main memory (or the L3 cache) and update its local value, effectively seeing the new value. Correct?

 

Q3. Even if y were not volatile, on x86 the store buffers would eventually be flushed, so the reading thread would eventually see an updated value of x, perhaps not the latest, but a non-zero value?

 

Thanks for any pointers/comments.






Re: Non-volatile reads

Alex Otenko
You are making many assumptions about the behaviour of the compiler without saying where you got them from. Not that I am asking for references, just pointing out how early your reasoning goes wrong.

The JMM only guarantees a happens-before edge between the write to x (x = newVal) and the read of x (return x) if you have program order between them (the same thread writes, then reads), or a happens-before edge between some other pair of instructions: one appearing in program order after the write to x, and one appearing in program order before the read of x.

For example, if you aren't reading from y as regularly as you read from x in the thread that reads x, the JMM does not guarantee anything about the visibility of x.
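
To make that concrete, an illustrative reader (not code from the thread) that does read y; it assumes package-private access to Foo.y, or an added volatile getter:

int readXAfterY(Foo foo) {
    while (foo.y == 0) {        // volatile read of y; synchronizes-with the writer's y = newVal
        // spin
    }
    return foo.getX();          // guaranteed to see the x written before that y (or a later one)
}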


Alex


Re: Non-volatile reads

Bobrowski, Maciej

Of course, I am referring to x86, which forces cache consistency, and where all writes are eventually flushed from the store buffers.

 


Re: Non-volatile reads

Roland Kuhn-2
Hi Maciej,

the issue is not the processor; there are many layers of code transformation before that which you cannot simply assume away. E.g. if you spin on `while (getX() == 0)`, the compiler may just emit an infinite loop, since you are clearly not writing to x in the loop.
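
For example, an illustrative spin loop like the following (not from Roland's mail) may be compiled so that the read of x is hoisted out of the loop:

// x is not volatile and the loop body never writes it, so the JIT may read x
// once and effectively turn this into: if (foo.getX() == 0) { while (true) { } }
// The loop can then spin forever even after the writer has stored a new value.
void spinUntilNonZero(Foo foo) {
    while (foo.getX() == 0) {
        // busy-wait
    }
}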

Regards,

Roland


Re: Non-volatile reads

Bobrowski, Maciej

I agree. The point of this exercise is to understand the low-level processor caches in the presence of volatile. So let's assume the code is not optimized and the value is read from cache/memory, as described below; I wanted to check whether my line of thinking is correct.

 


Re: Non-volatile reads

Aleksey Shipilev-3
In reply to this post by Bobrowski, Maciej
On 04/26/2017 05:11 PM, Bobrowski, Maciej wrote:

> class Foo {
>   int x = 0;
>   volatile int y = 0;
>
>   void write(int newVal) {
>      x = newVal;
>      y = newVal;
>   }
>
>   int getX(){ return x; }
>
> }
>
> I would like to assume that the compiler is NOT going to rewrite the getX() to a
> constant, and it will actually read it from memory/cache and not from a
> register. Let’s assume one thread is periodically calling write with increasing
> value, and one other thread is reading x.
This is actually a very strong assumption, and it is the first thing that would be violated in real-life cases.


> Q1. Volatile forces ordering and visibility of writes (x and y) across
> processors. As far as I see it, when volatile write happens, all store buffers
> of that core will be flushed in an exclusive manner (by obtaining exclusive flag
> on the processor). The push will invalidate all other cores cache lines that are
> related to the data written (not sure how though..). Is that correct?

Correct for some hypotheses about how Java accesses are compiled down, and how
the hardware works. For example, a "volatile store" does not always mean "flushing
the store buffer". The "exclusive" part is really up to whatever cache coherency
mechanism the given hardware employs. Etc.


> Q2. Given the above, after the flushing of the buffers happen, the other thread
> will be forced to re-read x from main mem (or L3 cache) and update its local
> value,, effectively seeing the new value. Correct?

Correct for some hypotheses about how the hardware works.

Generic hardware is not obliged to flush the store buffer completely, or in
order. So one can think up a hypothetical piece of hardware that only flushes
"y", although it would be hard to implement release-acquire there...

Also, if "x" and "y" are on different cache lines, anything that happens to
cache line holding "y" might not happen (or happen late, or even worse) to cache
line holding "x". Reasoning gets simpler if "y" drags the neighbors in its cache
line.

<insert more fantasies here>


> Q3. Even if y was not volatile, on x-86 the store buffers would eventually be
> flushed, so eventually reading process would see an updated value of x, perhaps
> not latest but non-zero value?

Correct, in the belief that everything else was fine.

<insert more fantasies here>


> Thanks for any pointers/comments.

Now, see, "correct, but..." does not mean "always correct".

Read this:
 http://gee.cs.oswego.edu/dl/html/j9mm.html

What you want is the JDK 9 VarHandles "opaque" mode, which guarantees progress. Read
carefully, and you will spot exactly the case you are asking about.

In the current implementation, "opaque" will indeed issue only compiler barriers,
letting the hardware cache coherency figure out the rest. But if we meet hardware
that does not provide this, it will be fixed in the JDK itself, not in some obscure
code in a user codebase.

Thanks,
-Aleksey




Re: Non-volatile reads

Alex Otenko
In reply to this post by Bobrowski, Maciej
That’s a novice way of looking at Java programs.

IF the volatile store of y is not optimized out,
IF the non-volatile read of x is not optimized out,
then the barriers will be issued, and the value of x will be visible to "the other" thread.

But those two IFs are not necessarily true. The JMM allows both of them to be optimized out in your example.

Alex


Re: Non-volatile reads

Andrew Haley
In reply to this post by Bobrowski, Maciej
On 26/04/17 16:11, Bobrowski, Maciej wrote:
> Q2. Given the above, after the flushing of the buffers happen, the other thread will be forced to re-read x from main mem (or L3 cache) and update its local value,, effectively seeing the new value. Correct?

No.  There is only a volatile write.  For the other thread to see
the volatile write, there has to be a volatile read to synchronize
with.
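
In other words, an illustrative variant of the class (not Andrew's code) that pairs the volatile write with a volatile read:

class FooWithAcquire {
    int x = 0;
    volatile int y = 0;

    void write(int newVal) {
        x = newVal;      // plain write
        y = newVal;      // volatile write publishes x
    }

    int getX() {
        int observed = y;   // volatile read: synchronizes-with the volatile write of y
        return x;           // if observed != 0, this sees the corresponding (or a later) x
    }
}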

Andrew.


Re: Non-volatile reads

Andrew Haley
In reply to this post by Aleksey Shipilev-3
I am also going to recommend "Memory Barriers: a Hardware View for Software Hackers":

http://www.puppetmastertrading.com/images/hwViewForSwHackers.pdf

... just need to get past all of this "flush the cache lines" stuff.

Andrew.