Re: Resource-based waiting


Re: Resource-based waiting

Millies, Sebastian
From: Justin Sampson [mailto:[hidden email]]
Sent: Thursday, January 22, 2015 12:49 AM
To: Millies, Sebastian; [hidden email]
Subject: RE: [concurrency-interest] Resource-based waiting (was: Spurious LockSupport.park() return?) [snip]
>if FIFO ordering of requests is absolutely necessary, I think you'll have to arrange for it more explicitly.

From: Vitaly Davidovich [mailto:[hidden email]]
Sent: Thursday, January 22, 2015 2:12 AM
>Have you looked into seeing if Semaphore (with fairness) would help? Your use case sounds like the classical
>resource usage throttle, with heap being the resource.

From: [hidden email] [mailto:[hidden email]] On Behalf Of Benedict Elliott Smith
Sent: Thursday, January 22, 2015 12:05 PM
[snip]
>Alternatively you could build your own with AbstractQueuedLongSynchronizer, by pretty much cutting/pasting
>from Semaphore, but extending AQLS instead of AQS.

Thank you all. Having taken all this advice to heart, I have reconsidered the design.

Yes, FIFO ordering is important. In fact, the desired behavior is the following:
- Requests are processed in the order in which they arrive.
- As many requests as will fit into memory may run in parallel.
- Requests that do not fit must wait until memory becomes available.
- Requests that would fit into memory must nonetheless wait if another request has arrived earlier.
- The scheme is fair: no request will be starved.

I now use an additional explicit ConcurrentLinkedQueue to buffer all incoming requests before they reach a Semaphore.
Requests must wait until they become the head of the queue; only then can they proceed to a fair semaphore that guards
memory allocation. Only after a request has passed the Semaphore does it remove itself from the queue and notify()
the next one in line.
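
Concretely, the hand-off looks something like this. (This is only an illustrative sketch: the class name MemoryThrottle, the object monitor used for the head-of-queue hand-off, and the assumption that a request's memory demand can be expressed as a permit count are all made up for the example.)

import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.Semaphore;

/** Illustrative sketch: an explicit FIFO queue in front of a fair semaphore. */
public class MemoryThrottle {

    private final ConcurrentLinkedQueue<Thread> queue = new ConcurrentLinkedQueue<>();
    private final Semaphore memory;                 // permits represent units of memory
    private final Object headChanged = new Object();

    public MemoryThrottle(int permits) {
        this.memory = new Semaphore(permits, true); // fair, to keep FIFO at the semaphore too
    }

    public void acquire(int permits) throws InterruptedException {
        Thread me = Thread.currentThread();
        queue.add(me);                              // 1. enter the explicit FIFO queue
        try {
            synchronized (headChanged) {
                while (queue.peek() != me) {        // 2. wait until we are the head of the queue
                    headChanged.wait();
                }
            }
            memory.acquire(permits);                // 3. only the head may block on the semaphore
        } finally {
            queue.remove(me);                       // 4. leave the queue and wake the next request,
            synchronized (headChanged) {            //    even if we were interrupted while waiting
                headChanged.notifyAll();
            }
        }
    }

    public void release(int permits) {
        memory.release(permits);
    }
}

A request needing, say, two units of memory would call acquire(2) before allocating and release(2) when done. I use notifyAll() together with the re-check loop, rather than a bare notify(), so that the wrong waiter cannot be woken and then lost.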

I have decided to follow Benedict's suggestion to copy/paste j.u.c.Semaphore, internally using
AbstractQueuedLongSynchronizer. This seemed simple to do, and better than introducing yet another
synchronization construct, because everyone already understands a semaphore. Is
AbstractQueuedLongSynchronizer supported on all platforms?
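
For what it's worth, here is roughly the shape of that clone, cut down to a fair, interruptible acquire/release. (The class name LongSemaphore is made up; the sync logic simply mirrors Semaphore's FairSync, only extending AbstractQueuedLongSynchronizer so that the permit count can be a long, e.g. bytes of heap.)

import java.util.concurrent.locks.AbstractQueuedLongSynchronizer;

/**
 * Minimal fair, long-valued semaphore modelled on j.u.c.Semaphore, but built on
 * AbstractQueuedLongSynchronizer so the permit count can be a long.
 */
public class LongSemaphore {

    private static final class FairSync extends AbstractQueuedLongSynchronizer {
        FairSync(long permits) { setState(permits); }

        @Override
        protected long tryAcquireShared(long acquires) {
            for (;;) {
                // Fairness: bail out if another thread is queued ahead of us,
                // regardless of how many permits are currently available.
                if (hasQueuedPredecessors())
                    return -1;
                long available = getState();
                long remaining = available - acquires;
                if (remaining < 0 || compareAndSetState(available, remaining))
                    return remaining;   // negative = go park in the wait queue
            }
        }

        @Override
        protected boolean tryReleaseShared(long releases) {
            for (;;) {
                long current = getState();
                long next = current + releases;
                if (next < current)     // overflow
                    throw new Error("Maximum permit count exceeded");
                if (compareAndSetState(current, next))
                    return true;
            }
        }
    }

    private final FairSync sync;

    public LongSemaphore(long permits) {
        this.sync = new FairSync(permits);
    }

    public void acquire(long permits) throws InterruptedException {
        if (permits < 0) throw new IllegalArgumentException();
        sync.acquireSharedInterruptibly(permits);
    }

    public void release(long permits) {
        if (permits < 0) throw new IllegalArgumentException();
        sync.releaseShared(permits);
    }
}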

I wouldn't want to pollute this forum. But if anyone is interested and would be willing to look at my proposed
solution, I'd be very happy to post it here (or send it privately, whatever is preferable).

-- Sebastian





Re: Resource-based waiting

Millies, Sebastian

I believe the semaphore's behavior is like this:

Semaphore sem = new Semaphore(6, true);

Thread1: sem.acquire(2); // returns immediately
Thread2: sem.acquire(5); // waits, only 4 permits left
Thread3: sem.acquire(2); // returns immediately (***)

I do NOT want (***). While Thread2 is waiting for permits, I want all further acquires to queue up as well, even if enough permits are available.

Or am I mistaken about Semaphore? The docs say:

> acquire(int permits): Acquires the given number of permits, if they are available, and returns immediately, reducing the number of available permits by the given amount.

> When fairness is set true, the semaphore guarantees that threads invoking any of the acquire methods are selected to obtain permits in the order in which their invocation of those methods was processed (first-in-first-out; FIFO).

I understand that to mean that fairness extends only to the order in which acquires are processed, not to the order in which threads actually obtain permits.

-- Sebastian

 

From: Benedict Elliott Smith [mailto:[hidden email]]
Sent: Friday, January 23, 2015 2:53 PM
To: Millies, Sebastian
Cc: concurrency-interest
Subject: Re: [concurrency-interest] Resource-based waiting

 

A fair semaphore is FIFO. That's the definition of its implementation of fairness. So you can just use it directly and not worry about proving your solution's correctness.

On 23 Jan 2015 09:58, "Millies, Sebastian" <[hidden email]> wrote:

[snip]



Re: Resource-based waiting

Millies, Sebastian

Thank you. So I misunderstood Semaphore after all :(

-- Sebastian

 

From: Benedict Elliott Smith [mailto:[hidden email]]
Sent: Friday, January 23, 2015 3:15 PM
To: Millies, Sebastian
Cc: concurrency-interest
Subject: Re: [concurrency-interest] Resource-based waiting

 

> When fairness is set true, the semaphore guarantees that threads invoking any of the acquire methods are selected to obtain permits in the order in which their invocation of those methods was processed (first-in-first-out; FIFO).

 

This describes exactly the behaviour you desire. Thread3 will not be permitted to acquire until after Thread2 has successfully done so. You can also confirm this by analysis of the FairSync, which returns failure if there are any queued threads ahead of any acquire, regardless of the number of permits available.
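
For instance, a small self-contained demo along these lines (hypothetical code; the sleeps are only there to make the interleaving predictable enough to observe) will show Thread3 parked even though more than enough permits are free:

import java.util.concurrent.Semaphore;

/** Demo: with a fair Semaphore, a later acquire queues up behind an earlier,
 *  still-blocked acquire even when enough permits are available for it. */
public class FairSemaphoreDemo {

    public static void main(String[] args) throws InterruptedException {
        Semaphore sem = new Semaphore(6, true);

        sem.acquire(2);                              // "Thread1": 4 permits left

        Thread t2 = acquirer(sem, 5, "Thread2");     // will block: only 4 available
        t2.start();
        Thread.sleep(200);                           // crude: let t2 enter the wait queue

        Thread t3 = acquirer(sem, 2, "Thread3");     // 4 >= 2, but t2 is queued ahead
        t3.start();
        Thread.sleep(200);

        // Thread3 is parked behind Thread2 despite 4 free permits.
        System.out.println("available = " + sem.availablePermits()
                + ", Thread3 state = " + t3.getState());   // typically WAITING

        sem.release(2);                              // "Thread1" done
        t2.join();
        t3.join();
    }

    private static Thread acquirer(Semaphore sem, int permits, String name) {
        return new Thread(() -> {
            try {
                sem.acquire(permits);
                System.out.println(name + " acquired " + permits);
                Thread.sleep(100);                   // pretend to do some work
                sem.release(permits);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }, name);
    }
}

Once Thread1 releases, Thread2 acquires its 5 permits first, and only then does Thread3 get its 2.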

 

 

On 23 January 2015 at 14:06, Millies, Sebastian <[hidden email]> wrote:

[snip]





Re: Resource-based waiting

Justin Sampson
Sebastian Millies wrote:

> thank you. So I misunderstood Semaphore after all :(

Many folks here misunderstood it as well (myself included), until a lengthy discussion just last
month. Doug has made some changes to the docs, hoping to make this clearer; you can see them in
the latest revision in the jsr166 CVS repository:

http://gee.cs.oswego.edu/cgi-bin/viewcvs.cgi/jsr166/src/main/java/util/concurrent/Semaphore.java?revision=1.72&view=markup

If you're interested in the discussion itself, look for the subjects
"Enforcing ordered execution of critical sections?" and "Semaphore
doc bug" in the archives for last month:

http://cs.oswego.edu/pipermail/concurrency-interest/2014-December/thread.html

Cheers,
Justin

_______________________________________________
Concurrency-interest mailing list
[hidden email]
http://cs.oswego.edu/mailman/listinfo/concurrency-interest