Abstract Stacked Synchronizer?

JSR166 Concurrency mailing list
Hi,

I have a ThreadPoolExecutor that showed some surprising behavior. It is configured with a core pool size of 10, a maximum size of 400, and a keep-alive time of 1 minute. When the code runs, short-lived tasks are executed on the executor at a high rate, about 300 per second. Each task takes only about 10 ms, so there isn't much work to do.
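For reference, the setup looks roughly like this (the queue type and its capacity here are illustrative, not the exact values from my code):

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolSetup {
    public static ThreadPoolExecutor build() {
        return new ThreadPoolExecutor(
                10,                  // core pool size
                400,                 // maximum pool size
                1, TimeUnit.MINUTES, // keep-alive for threads beyond core
                new LinkedBlockingQueue<>(100)); // bounded queue; capacity assumed
    }

    public static void main(String[] args) {
        ThreadPoolExecutor pool = build();
        System.out.println(pool.getCorePoolSize() + " / " + pool.getMaximumPoolSize());
        pool.shutdown();
    }
}
```

The queue has to be bounded for the pool to grow past the core size at all: ThreadPoolExecutor only creates threads beyond core once an offer to the queue fails.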

What I noticed is that spikes in load cause the pool size to go up, but it never goes back down. This makes sense, but it isn't the desired behavior. When the task submission rate exceeds what the queue can absorb, more threads are brought in; they take the work and then contend on the queue. The queue uses an AbstractQueuedSynchronizer, which enqueues the threads waiting to accept more work.

The problem here is that the threads are effectively served round-robin as work comes in (even in unfair mode). Each 10 ms task that arrives gives the other threads enough time to move up in line. In effect, despite there being hardly enough work for hundreds of threads, the load is spread so evenly among them that none ever reaches the keep-alive timeout.

What I think the solution should be is a BlockingQueue built on an unfair Lock: the least fair lock that can be made, where threads always cut in line to the front. I believe using a stack instead of a queue in AQS would shrink the thread pool much more aggressively. Considering how little work is actually coming in, I don't think contention between the threads would be a serious issue.
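To illustrate what I mean, here's a toy simulation (no real threads; all numbers and names are made up for the sketch). One task arrives every 10 ms and takes 10 ms, so a single worker could carry the whole load. With FIFO handoff the work rotates across all 100 workers and none of them ever idles for the 60 s keep-alive; with LIFO handoff the most recently parked worker is always reused, so the other 99 sit idle long enough to time out:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class HandoffOrder {
    /** Returns how many of 100 workers would hit a 60 s keep-alive timeout. */
    static int timedOutWorkers(boolean lifo) {
        final int WORKERS = 100;
        final long TASK_MS = 10, KEEP_ALIVE_MS = 60_000, SIM_MS = 100_000;
        Deque<Integer> idle = new ArrayDeque<>();
        long[] idleSince = new long[WORKERS];
        boolean[] timedOut = new boolean[WORKERS];
        for (int w = 0; w < WORKERS; w++) idle.addLast(w);
        for (long now = 0; now < SIM_MS; now += TASK_MS) {
            // LIFO wakes the most recently parked worker; FIFO rotates.
            int w = lifo ? idle.pollLast() : idle.pollFirst();
            if (now - idleSince[w] >= KEEP_ALIVE_MS) timedOut[w] = true;
            idleSince[w] = now + TASK_MS; // busy until the task finishes
            idle.addLast(w);              // park again after the task
        }
        for (int w = 0; w < WORKERS; w++) // workers never reused stay idle to the end
            if (SIM_MS - idleSince[w] >= KEEP_ALIVE_MS) timedOut[w] = true;
        int n = 0;
        for (boolean t : timedOut) if (t) n++;
        return n;
    }

    public static void main(String[] args) {
        System.out.println("FIFO timeouts: " + timedOutWorkers(false)); // 0
        System.out.println("LIFO timeouts: " + timedOutWorkers(true));  // 99
    }
}
```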

Does something like this exist?
_______________________________________________
Concurrency-interest mailing list
[hidden email]
http://cs.oswego.edu/mailman/listinfo/concurrency-interest

Re: Abstract Stacked Synchronizer?

TBH it would be interesting to have the size driven by utilization. It looks rather straightforward to apply queuing theory in this case and have each worker keep track of time spent servicing vs. time spent waiting.
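A sketch of the per-worker bookkeeping I have in mind (class and method names are purely illustrative):

```java
/**
 * Each worker records time spent running tasks vs. time spent waiting
 * for work; the pool could then retire workers whose utilization stays
 * low. Thread-safety and windowing are deliberately omitted here.
 */
public class WorkerStats {
    private long servicingNanos, waitingNanos;

    public void recordServicing(long nanos) { servicingNanos += nanos; }
    public void recordWaiting(long nanos)   { waitingNanos += nanos; }

    /** Fraction of observed time spent actually servicing tasks. */
    public double utilization() {
        long total = servicingNanos + waitingNanos;
        return total == 0 ? 0.0 : (double) servicingNanos / total;
    }
}
```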

On Sun, Mar 29, 2020 at 11:40 PM Carl M via Concurrency-interest <[hidden email]> wrote:


--
Cheers,


Re: Abstract Stacked Synchronizer?

The CLR's thread injection algorithm is fairly interesting in this respect: it uses hill climbing to determine the thread count of its pool.
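A toy version of the idea (this is not the CLR's actual algorithm, and the throughput function below is a stand-in for real measurements): probe throughput at the current size and one step up or down, and move toward whichever measures best.

```java
import java.util.function.IntToDoubleFunction;

public class HillClimb {
    /** Climb toward the thread count that maximizes measured throughput. */
    public static int climb(IntToDoubleFunction throughput,
                            int start, int min, int max, int steps) {
        int n = start;
        for (int i = 0; i < steps; i++) {
            double here = throughput.applyAsDouble(n);
            int up = Math.min(max, n + 1), down = Math.max(min, n - 1);
            if (throughput.applyAsDouble(up) > here)       n = up;
            else if (throughput.applyAsDouble(down) > here) n = down;
            else break; // local optimum: neither neighbor improves
        }
        return n;
    }

    public static void main(String[] args) {
        // Concave stand-in for measured throughput, peaking at 16 threads.
        IntToDoubleFunction tp = n -> -(n - 16.0) * (n - 16.0);
        System.out.println(climb(tp, 1, 1, 64, 100)); // 16
    }
}
```

Real measurements are noisy and the optimum moves with load, which is why the CLR's version is considerably more involved than a plain greedy climb.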

On Mon, Mar 30, 2020 at 12:36 AM Viktor Klang via Concurrency-interest <[hidden email]> wrote:
