Resource-based waiting (was: Spurious LockSupport.park() return?)


Resource-based waiting (was: Spurious LockSupport.park() return?)

Millies, Sebastian

Everyone, thanks for the explanations regarding LockSupport.park().

 

Please let me describe the problem that the code I am faced with appears to be trying to solve. Perhaps this rings a bell and someone can point me to some higher-level construct from j.u.c which one might use. My own proposal follows below.

 

- Concurrent requests are made of a server. There is one thread per request. The server calculates how much heap space will be needed to fulfill each request and keeps track of available memory.

- Requests that can be served with the available memory may run immediately. Other requests have to wait in a queue.

- Whenever a request finishes, it wakes up the longest-waiting thread to give it a chance to run.

- Whenever a thread wakes up and finds it can run within the available memory, it removes itself from the queue and also recursively wakes up the next longest-waiting thread, to give it a chance to run, too.

 

Doesn’t sound too unusual.

 

The current implementation uses a central coordinating singleton, which keeps track of the memory requirements per thread in a ThreadLocal variable, available memory in an instance variable, and has a BlockingQueue for waiting tasks. These resources are protected using synchronized blocks. But this is interspersed (in code stretches not protected with a lock) with calls to park() and unpark(queue.peek()) for suspending and resuming threads. The solution is obviously incorrect, as described in a previous post.

 

I wonder if a rewrite might not be better than a fix. I was thinking that maybe ReentrantLock (the fair version, because I want the FIFO behavior) would be good to use. I might get a condition from the lock and use Condition.await()/Condition.signal() to suspend/resume threads. I think I might not even need an explicit queue. Is that right? Sort of like this (comments welcome):

 

import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class WaitQueue
{
  private final ReentrantLock lock = new ReentrantLock( true ); // use a fair lock.
  private final Condition mayRun = lock.newCondition();

  private final ThreadLocal<Long> requiredMemory = new ThreadLocal<Long>();
  private long availableMemory;

  public WaitQueue( long availableMemory )
  {
    this.availableMemory = availableMemory;
  }

  public final void await() throws InterruptedException
  {
    lock.lock();
    try {
      while( availableMemory < requiredMemory.get() ) {
        mayRun.await();
      }
      availableMemory -= requiredMemory.get();
      mayRun.signal(); // give next in line a chance, too
    }
    finally {
      lock.unlock();
    }
  }

  public final void release()
  {
    lock.lock();
    try {
      availableMemory += requiredMemory.get();
      mayRun.signal();
    }
    finally {
      lock.unlock();
    }
  }
}

 

I’d use it like this:

 

interface Request
{
  long requiredMemory();
  void serve();
}

 

public void serveRequest( Request req )
{
  requiredMemory.set( req.requiredMemory() );
  try {
    await();
  }
  catch( InterruptedException e ) {
    return; // thread shutdown requested
  }

  try {
    req.serve();
  }
  finally {
    release();
    requiredMemory.remove();
  }
}

 

Or should I do something completely different? Is there a standard approach to this kind of problem?

 

Sebastian Millies

PPM, Saarbrücken C1.67, +49 681 210 3221

 


Software AG – Sitz/Registered office: Uhlandstraße 12, 64297 Darmstadt, Germany – Registergericht/Commercial register: Darmstadt HRB 1562 - Vorstand/Management Board: Karl-Heinz Streibich (Vorsitzender/Chairman), Eric Duffaut, Dr. Wolfram Jost, Arnd Zinnhardt; - Aufsichtsratsvorsitzender/Chairman of the Supervisory Board: Dr. Andreas Bereczky - http://www.softwareag.com


_______________________________________________
Concurrency-interest mailing list
[hidden email]
http://cs.oswego.edu/mailman/listinfo/concurrency-interest

Re: Resource-based waiting (was: Spurious LockSupport.park() return?)

Justin Sampson
Sebastian Millies wrote:

> I wonder if a rewrite might not be better than a fix. I was thinking
> that maybe ReentrantLock (the fair version, because I want the FIFO
> behavior) would be good to use. I might get a condition from the lock
> and use Condition.await()/Condition.signal() to suspend/resume
> threads. I think I might not even need an explicit queue. Is that
> right? Sort of like this (comments welcome):

Others more expert than myself may have more ideas for standard
solutions. I personally think you're on the right track with the
rewrite, but I just have a couple of quick comments.

First, a simple signal() won't work in this case because each of the
waiters is really waiting for a _different_ condition. If the signal()
wakes up a thread that is waiting for more than the available memory, it
will go right back to waiting without waking up another thread. You
could use signalAll() or create a Condition for each possible amount
being waited for.
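To make that first point concrete, here is a minimal sketch of the signalAll() variant. It is not Sebastian's exact class: the class name is invented, and the required amount is passed as a parameter rather than read from a ThreadLocal. On release, every waiter wakes and re-checks its own requirement:

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical signalAll() variant: because each waiter waits for a
// different amount, release() wakes them all and each re-checks.
public class BroadcastWaitQueue {
    private final ReentrantLock lock = new ReentrantLock(true);
    private final Condition mayRun = lock.newCondition();
    private long availableMemory;

    public BroadcastWaitQueue(long availableMemory) {
        this.availableMemory = availableMemory;
    }

    public void acquire(long required) throws InterruptedException {
        lock.lock();
        try {
            while (availableMemory < required) {
                mayRun.await();
            }
            availableMemory -= required;
        } finally {
            lock.unlock();
        }
    }

    public void release(long amount) {
        lock.lock();
        try {
            availableMemory += amount;
            mayRun.signalAll(); // every waiter re-checks its own requirement
        } finally {
            lock.unlock();
        }
    }
}
```

The cost is a thundering herd on every release; the per-amount-Condition alternative avoids that at the price of bookkeeping.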

Second, if FIFO ordering of requests is absolutely necessary, I think
you'll have to arrange for it more explicitly. Making a ReentrantLock
"fair" just means that acquiring the lock itself is FIFO ordered. If
threads are waiting and being signalled, they're going back and forth
between the lock queue and the wait queue, which doesn't preserve any
overall ordering.
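For the second point, one way to arrange FIFO explicitly is the "specific notification" pattern: an explicit queue of waiters, each with its own Condition, where only the head waiter is ever signalled. A hypothetical sketch (all names invented, not code from this thread):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical explicit-FIFO gate: waiters queue in arrival order and
// only the head of the queue is ever signalled.
public class FifoMemoryGate {
    private static final class Waiter {
        final long required;
        final Condition cond;
        Waiter(long required, Condition cond) { this.required = required; this.cond = cond; }
    }

    private final ReentrantLock lock = new ReentrantLock();
    private final Deque<Waiter> waiters = new ArrayDeque<>();
    private long availableMemory;

    public FifoMemoryGate(long availableMemory) { this.availableMemory = availableMemory; }

    public void acquire(long required) throws InterruptedException {
        lock.lock();
        try {
            if (waiters.isEmpty() && availableMemory >= required) { // fast path: no one ahead
                availableMemory -= required;
                return;
            }
            Waiter me = new Waiter(required, lock.newCondition());
            waiters.addLast(me);
            try {
                while (waiters.peekFirst() != me || availableMemory < required) {
                    me.cond.await();
                }
            } catch (InterruptedException e) {
                waiters.remove(me);
                signalHead(); // don't strand the next waiter
                throw e;
            }
            waiters.removeFirst();
            availableMemory -= required;
            signalHead(); // the new head may fit, too
        } finally {
            lock.unlock();
        }
    }

    public void release(long amount) {
        lock.lock();
        try {
            availableMemory += amount;
            signalHead();
        } finally {
            lock.unlock();
        }
    }

    private void signalHead() {
        Waiter head = waiters.peekFirst();
        if (head != null) head.cond.signal();
    }
}
```

Note this gives strict FIFO, so a small request can be blocked behind a large one that cannot yet fit; that is the ordering guarantee being asked for, not a bug.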

Cheers,
Justin


Re: Resource-based waiting (was: Spurious LockSupport.park() return?)

Vitaly Davidovich
In reply to this post by Millies, Sebastian

Have you looked into seeing if Semaphore (with fairness) would help? Your use case sounds like the classical resource usage throttle, with heap being the resource.

sent from my phone
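A rough sketch of the throttle Vitaly describes, assuming one permit per byte of heap (class and method names are hypothetical; note that java.util.concurrent.Semaphore counts permits as an int):

```java
import java.util.concurrent.Semaphore;

// Hypothetical heap-usage throttle on a fair Semaphore:
// one permit per byte of available memory.
public class MemoryThrottle {
    private final Semaphore permits;

    public MemoryThrottle(int availableMemory) {
        this.permits = new Semaphore(availableMemory, true); // fair: FIFO among waiters
    }

    public void serve(int requiredMemory, Runnable request) throws InterruptedException {
        permits.acquire(requiredMemory); // blocks until enough memory is free
        try {
            request.run();
        } finally {
            permits.release(requiredMemory); // always give the memory back
        }
    }
}
```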


Re: Resource-based waiting (was: Spurious LockSupport.park() return?)

Justin Sampson
Vitaly Davidovich wrote:

> Have you looked into seeing if Semaphore (with fairness) would
> help? Your use case sounds like the classical resource usage
> throttle, with heap being the resource.

Ah, of course! A fair Semaphore would maintain the ordering because
there's no distinction between lock and wait queue.

Indeed, to take a quick flashback, another recent discussion we had
about Semaphore semantics established the advice that if various
threads are passing different values to acquire(N), as they would be
in this case, it ONLY makes sense if the Semaphore is fair. The
behavior is potentially surprising otherwise.

Cheers,
Justin


Re: Resource-based waiting (was: Spurious LockSupport.park() return?)

Millies, Sebastian
Thanks, I have looked at it.

In my case, the number of permits would have to be the amount of available memory in bytes. Each thread would acquire multiple permits at once, which should be OK with a fair Semaphore.

However, a Semaphore can only manage an INTEGER number of permits, which would amount to only ca. 2 gigabytes. Much too low. How could I manage a LONG number of permits?

Sebastian



FW: Resource-based waiting (was: Spurious LockSupport.park() return?)

Millies, Sebastian
In reply to this post by Justin Sampson
This 1999 implementation of Semaphore still had a LONG number of permits, which is what I need.
http://pag-www.gtisc.gatech.edu/psa/hedc/EDU/oswego/cs/dl/util/concurrent/Semaphore.java.html

Why did that not make it into the JDK?

-- Sebastian



Re: FW: Resource-based waiting (was: Spurious LockSupport.park() return?)

David Holmes-6
Millies writes:
>
> this 1999 implementation of Semaphore still had a LONG number of permits, which is what I need.
> http://pag-www.gtisc.gatech.edu/psa/hedc/EDU/oswego/cs/dl/util/concurrent/Semaphore.java.html
>
> Why did that not make it into the JDK?

Because there were some platforms (e.g. PPC) where atomic long updates require locking, as there is no native support for 64-bit atomic updates. This would have made Semaphore terribly slow on such platforms, so the API was changed to use int (this was 2003).

David
 


Re: FW: Resource-based waiting (was: Spurious LockSupport.park() return?)

Benedict Elliott Smith-3
In reply to this post by Millies, Sebastian
It seems like a simple change would be to allocate on some byte boundary: with multiples of 32 bytes, the Semaphore could manage 64 GB.
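A sketch of that granularity trick, assuming a 32-byte chunk size (the class name and the availableBytes() accessor are invented for illustration): requests are rounded up to whole chunks, so an int-permit Semaphore can cover tens of gigabytes:

```java
import java.util.concurrent.Semaphore;

// Hypothetical chunked throttle: one permit per 32-byte chunk, so an
// int Semaphore (~2^31 permits) covers roughly 64 GB.
public class ChunkedMemoryThrottle {
    private static final long CHUNK = 32; // assumed granularity in bytes
    private final Semaphore chunks;

    public ChunkedMemoryThrottle(long availableBytes) {
        long n = availableBytes / CHUNK;
        if (n > Integer.MAX_VALUE) throw new IllegalArgumentException("too large even in chunks");
        this.chunks = new Semaphore((int) n, true); // fair, as discussed above
    }

    private static int toChunks(long bytes) {
        return (int) ((bytes + CHUNK - 1) / CHUNK); // round up to whole chunks
    }

    public void acquire(long bytes) throws InterruptedException {
        chunks.acquire(toChunks(bytes));
    }

    public void release(long bytes) {
        chunks.release(toChunks(bytes));
    }

    public long availableBytes() {
        return (long) chunks.availablePermits() * CHUNK;
    }
}
```

Rounding up means a request may reserve up to 31 bytes more than it asked for, which is harmless for an admission throttle.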

If you want a drop-in replacement for your park() and unpark() calls that works as you intend, though, Cassandra has a simple coordination primitive called a WaitQueue, which essentially provides single-use wrappers around park/unpark without spurious wakeups. It is somewhat like AbstractQueuedSynchronizer, but without the imposition that you build a synchronization primitive yourself with it.

You can find a version here and a new variant in my private repository here.

Alternatively you could build your own with AbstractQueuedLongSynchronizer, by pretty much cutting/pasting from Semaphore, but extending AQLS instead of AQS.
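A sketch of that cut-and-paste, reduced to a fair acquire/release pair over AbstractQueuedLongSynchronizer. This is a simplified adaptation of Semaphore's fair sync with invented names, not a drop-in for the real class:

```java
import java.util.concurrent.locks.AbstractQueuedLongSynchronizer;

// Hypothetical long-valued fair semaphore, adapted from
// java.util.concurrent.Semaphore but extending AQLS instead of AQS.
public class LongSemaphore {
    private static final class Sync extends AbstractQueuedLongSynchronizer {
        Sync(long permits) { setState(permits); }

        long permits() { return getState(); }

        @Override
        protected long tryAcquireShared(long acquires) {
            for (;;) {
                // fair: newcomers queue up behind threads that arrived earlier
                if (hasQueuedPredecessors()) return -1;
                long available = getState();
                long remaining = available - acquires;
                if (remaining < 0 || compareAndSetState(available, remaining))
                    return remaining;
            }
        }

        @Override
        protected boolean tryReleaseShared(long releases) {
            for (;;) {
                long current = getState();
                long next = current + releases;
                if (next < current) throw new Error("permit count overflow");
                if (compareAndSetState(current, next)) return true;
            }
        }
    }

    private final Sync sync;

    public LongSemaphore(long permits) { this.sync = new Sync(permits); }

    public void acquire(long permits) throws InterruptedException {
        sync.acquireSharedInterruptibly(permits);
    }

    public void release(long permits) {
        sync.releaseShared(permits);
    }

    public long availablePermits() { return sync.permits(); }
}
```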



Re: Resource-based waiting (was: Spurious LockSupport.park() return?)

oleksandr otenko
In reply to this post by Millies, Sebastian
You want to move signal() to finally
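Applied to the proposed WaitQueue, that might look like the following sketch (renamed SafeWaitQueue here, with the amount passed as a parameter rather than a ThreadLocal): the chained signal moves into a finally block, so a waiter that is interrupted while parked still passes a pending wakeup along instead of swallowing it:

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical variant of the proposed WaitQueue with signal() in finally,
// so an interrupted waiter never loses a wakeup meant for the next in line.
public class SafeWaitQueue {
    private final ReentrantLock lock = new ReentrantLock(true);
    private final Condition mayRun = lock.newCondition();
    private long availableMemory;

    public SafeWaitQueue(long availableMemory) { this.availableMemory = availableMemory; }

    public void acquire(long required) throws InterruptedException {
        lock.lock();
        try {
            try {
                while (availableMemory < required) {
                    mayRun.await();
                }
                availableMemory -= required;
            } finally {
                mayRun.signal(); // always pass the baton, even on interrupt
            }
        } finally {
            lock.unlock();
        }
    }

    public void release(long amount) {
        lock.lock();
        try {
            availableMemory += amount;
            mayRun.signal();
        } finally {
            lock.unlock();
        }
    }
}
```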

Alex


Re: Resource-based waiting (was: Spurious LockSupport.park() return?)

William Louth
In reply to this post by Millies, Sebastian
A more standardized approach to this problem, and to many similar problems, is resource metering of requests and their control via QoS policies and pools. I wrote an introductory article on this a while back:

http://www.infoq.com/articles/QoS-for-Applications

Since then I've extended the underlying control framework to support resilience via adaptive control valves. You can see this in action across many videos listed here: https://vimeo.com/groups/sentris

The approach used underneath is reservation-based, though the reservation method can use different measures (or units) depending on the service policy. All the options available in the QoS framework are supported by a QoSPool interface, which could be standardized if the Java/JVM team were ever to consider adding context-based resource management.

William
