WARNING: This server provides a static reference view of the NetKernel documentation. Links to dynamic content do not work. For the best experience we recommend you install NetKernel and view the documentation in the live system.

Description: Only allow one request at a time, but reject old requests that are superseded.
Category: transparent overlay

CutToTheChaseThrottle is a transparent overlay. You must instantiate it from its prototype; this creates a new instance within your application space.


The mod.architecture.CutToTheChaseThrottle prototype has the following initialisation parameters:

- A nested space definition into which the overlay will delegate all requests.
- An optional key: a resource identifier to SOURCE as a String to determine the throttle instance to use (see Key Parameter below).

Here is an auto-generated example of how to instantiate CutToTheChaseThrottle:

    <!--wrapped space...-->
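The auto-generated example above is truncated in this static view. As a rough sketch following the standard NetKernel overlay declaration pattern (element ordering should be verified against the live documentation), an instantiation in module.xml might look like:

```xml
<overlay>
    <prototype>CutToTheChaseThrottle</prototype>
    <space>
        <!-- wrapped space: endpoint(s) to be throttled go here -->
    </space>
</overlay>
```

The nested space element is the wrapped space into which the overlay delegates all requests it allows through.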
Import Requirements

To use the CutToTheChaseThrottle transparent overlay you must import the module urn:com:ten60:netkernel:mod:architecture.
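In a module.xml the import is typically declared inside the space that instantiates the overlay; a sketch, assuming the standard NetKernel import syntax:

```xml
<import>
    <uri>urn:com:ten60:netkernel:mod:architecture</uri>
</import>
```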


The CutToTheChaseThrottle is designed to reduce the cost of servicing a chatty client that sends many redundant requests, for example sensor readings or finger-position events from a touch screen. In these cases we can simply cut to the chase and treat the last queued request as definitive.


The CutToTheChaseThrottle is implemented as an overlay in the same way as the regular Concurrency Throttle. The throttle allows only one request to pass at a time; if an additional request is received it is queued. Once a request is queued, each further request that arrives is queued in its place and the previously queued request is rejected by issuing a "Request Rejected" exception response. The currently executing request is unaffected. It might be considered desirable to cancel the currently executing request and always start executing the latest received request. This is not possible at the moment, however, because there are no well-defined semantics for terminating requests early, except in the heavy-handed way the deadlock detector does (which terminates the whole sub-request tree).


Here is an example of operation. In this scenario 5 requests are issued 100ms apart to an endpoint behind the throttle that takes 1000ms to process each request. We get a response to the 5th request after 2000ms, rather than the 5000ms it would take if all five requests were queued and processed serially. The test output has three columns: time (ms), operation, request number.

    0000 REQUEST: 0
    0000 START: 0
    0100 REQUEST: 1
    0201 REQUEST: 2
    0202 REJECT: 1
    0302 REQUEST: 3
    0303 REJECT: 2
    0404 REQUEST: 4
    0404 REJECT: 3
    1001 COMPLETE: 0
    1001 RELEASE: 4
    1002 START: 4
    2002 COMPLETE: 4

Operation legend:

- REQUEST: request issued
- IMMEDIATE: request is immediately issued by the throttle
- START: endpoint starts processing a request
- REJECT: throttle rejects a request
- RELEASE: throttle releases a queued request for processing
- COMPLETE: endpoint completes processing of a request

Key Parameter

One additional feature is the ability to maintain multiple throttles keyed on the value of some resource contextual to the request being processed, e.g. the remote host address of a client. This is achieved by specifying an optional key parameter, which defines a resource identifier that will be SOURCEd to obtain the key name of the throttle to use. Throttles are created and destroyed on demand, so there is no management complexity.

Here is an example module.xml fragment:

    <!-- endpoint(s) to be throttled here -->
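The fragment above omits the surrounding declaration in this static view. A sketch of a keyed throttle declaration, where the key element placement and the res:/... identifier are illustrative assumptions to be checked against the live documentation:

```xml
<overlay>
    <prototype>CutToTheChaseThrottle</prototype>
    <!-- hypothetical key: a resource SOURCEd per-request to select the throttle instance -->
    <key>res:/throttle/client-key</key>
    <space>
        <!-- endpoint(s) to be throttled here -->
    </space>
</overlay>
```

With a key configured, each distinct key value (e.g. each remote host address) gets its own independent cut-to-the-chase queue.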