
Distributed Golden Thread

Distributed Golden Threads work in a similar way to regular Golden Threads.

One or more virtual dependencies can be attached to any response generated by an endpoint; later they can be cut by another process or endpoint to invalidate all dependent resources.

This approach is particularly useful when a resource is external to the ROC system, for example held in a database or provided by an external service.
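For background, a regular (local) Golden Thread is typically used as follows: the service that reads the external resource attaches the thread as a virtual dependency of its response, and the process that modifies the external resource cuts the thread, expiring every dependent cached response. Below is a minimal sketch using the standard layer1 Golden Thread endpoints, with an illustrative thread id:

// Service that reads from the external source: attach the virtual dependency
// so the cacheable response expires when the thread is cut.
req=context.createRequest("active:attachGoldenThread");
req.addArgument("id", "gt:customer:42");
context.issueRequest(req);
// ...query the external source, build and return the cacheable response...

// Update process: cut the thread when the external data changes,
// invalidating every response that attached it.
req=context.createRequest("active:cutGoldenThread");
req.addArgument("id", "gt:customer:42");
context.issueRequest(req);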

Distributed golden threads share and synchronize golden thread state across an arbitrary number of instances in a hub-spoke pattern.

This allows virtual dependencies to be distributed across a cluster of NetKernel instances and thereby provides a powerful tool to manage distributed cache consistency.

Architecture

Two or more hosts must be arranged in a hub-spoke pattern.

One host is configured as a server while the others are clients to the server. Once the pattern is established all hosts operate symmetrically. Each host is able to attach and cut golden threads and all other hosts will be synchronized.

Configuration

Both client and server are prototypes that must be initialised with an NKP configuration resource. This can either be an inline literal or a regular resource.

Any valid NKP configuration can be used for distributed golden threads. The only constraints are that the client NKP configuration must contain:

<exposeRequestScope>true</exposeRequestScope>
and both client and server must contain:
<passByValue>true</passByValue>
This enables the server to push golden thread expiries asynchronously to clients.
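For example, a client-side nkpConfig will therefore include at least the following; the tunnel host and port here are placeholders, and complete instance declarations are shown in the Examples section below:

<nkpConfig>
  <tunnel factory="com.ten60.netkernel.nkp.netty.NettyNKPTunnelFactory">
    <host>dgt.server</host>
    <port>8102</port>
  </tunnel>
  <exposeRequestScope>true</exposeRequestScope>
  <passByValue>true</passByValue>
</nkpConfig>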

Both urn:org:netkernel:mod:hds and urn:com:ten60:netkernel:nkp2 must be imported into the space containing the distributed golden thread prototype for it to operate correctly (see the import declarations in the examples below).

Coalesce

The distributed golden thread endpoints coalesce cut requests into packets to manage network overhead. A configurable coalesce period must be specified; this determines the maximum frequency of messages between hosts. The coalesce period is currently the only property specified by the config parameter:

<config>
  <coalescePeriod>500</coalescePeriod>
</config>

Listeners

It is possible to attach listeners for cut events on one or more golden threads. The prototype exposes an endpoint which supports active:addDGTCutListener and active:removeDGTCutListener. This follows a pattern and grammar mirroring the layer1 Golden Thread Listener.

Listeners are only notified after all connected clients have had their threads cut by the server. This two-phase approach ensures that any recalculation performed by the listener will work with fresh resource state.
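As a rough sketch of registering a listener: the exact argument names are not shown in this documentation, so the id and listener arguments below are assumptions based on the layer1 listener pattern; check the endpoint's grammar (for example in the Space Explorer) for the definitive form.

// Assumed argument names ("id", "listener") - verify against the endpoint grammar.
req=context.createRequest("active:addDGTCutListener");
req.addArgument("id", "virtual:golden:thread:foo");
req.addArgument("listener", "active:rebuildFooCache");  // illustrative callback request
context.issueRequest(req);

// Remove the listener when it is no longer required.
req=context.createRequest("active:removeDGTCutListener");
req.addArgument("id", "virtual:golden:thread:foo");
req.addArgument("listener", "active:rebuildFooCache");
context.issueRequest(req);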

State

Both the DGT Client and Server expose runtime state so that you can observe and monitor the operation of the golden threads. This state can be inspected within the Space Explorer or accessed programmatically.

Logging

The DGT Server creates a custom log. It records all client connections and disconnections as well as every cut.

Examples

How to construct a DGT Server instance running on port 8102 using the prototype provided by mod:dgt:

<rootspace name="DGT Server">
  <endpoint>
    <prototype>DistributedGoldenThreadServer</prototype>
    <config>
      <coalescePeriod>500</coalescePeriod>
    </config>
    <nkpConfig>
      <tunnel factory="com.ten60.netkernel.nkp.netty.NettyNKPTunnelFactory">
        <port>8102</port>
      </tunnel>
      <passByValue>true</passByValue>
    </nkpConfig>
  </endpoint>
  <import>
    <uri>urn:com:ten60:netkernel:mod:dgt</uri>
    <private />
  </import>
  <import>
    <uri>urn:org:netkernel:mod:hds</uri>
    <private />
  </import>
  <import>
    <uri>urn:com:ten60:netkernel:nkp</uri>
    <private />
  </import>
</rootspace>

How to construct a DGT Client instance connecting to a DGT server on host "dgt.server" at port 8102 using the prototype provided by mod:dgt:

<rootspace name="DGT Client">
  <endpoint>
    <prototype>DistributedGoldenThreadClient</prototype>
    <config>
      <coalescePeriod>500</coalescePeriod>
    </config>
    <nkpConfig>
      <tunnel factory="com.ten60.netkernel.nkp.netty.NettyNKPTunnelFactory">
        <host>dgt.server</host>
        <port>8102</port>
      </tunnel>
      <exposeRequestScope>true</exposeRequestScope>
      <passByValue>true</passByValue>
    </nkpConfig>
  </endpoint>
  <import>
    <uri>urn:com:ten60:netkernel:mod:dgt</uri>
    <private />
  </import>
  <import>
    <uri>urn:org:netkernel:mod:hds</uri>
    <private />
  </import>
  <import>
    <uri>urn:com:ten60:netkernel:nkp</uri>
    <private />
  </import>
</rootspace>

Usage in Applications

Both the client and server present a pair of user-accessible endpoints for attaching (active:attachDGT) and cutting (active:cutDGT) golden threads. The semantics of these are identical to the local Golden Thread endpoints.

Example of attaching distributed golden threads...

req=context.createRequest("active:attachDGT");
req.addArgument("id", "virtual:golden:thread:foo")
req.addArgument("id", "virtual:golden:thread:baa")
context.issueRequest(req)

Example of cutting distributed golden threads...

req=context.createRequest("active:cutDGT");
req.addArgument("id", "virtual:golden:thread:foo")
req.addArgument("id", "virtual:golden:thread:baz")
context.issueRequest(req)
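Putting this together, a typical pattern (sketched below with illustrative thread ids) is for a service on one instance to attach the distributed golden thread while building its cacheable response from an external source, and for whichever instance processes an update to cut the thread; the cut propagates through the DGT server to every connected client, expiring the dependent cached resources cluster-wide.

// On any instance: a service reading from an external database attaches the
// distributed golden thread so that its response depends on it.
req=context.createRequest("active:attachDGT");
req.addArgument("id", "virtual:golden:thread:customer:42");
context.issueRequest(req);
// ...query the database, build and return the cacheable response...

// On whichever instance handles the update: cut the thread. The cut is
// coalesced and propagated via the DGT server to all connected clients,
// expiring every dependent response across the cluster.
req=context.createRequest("active:cutDGT");
req.addArgument("id", "virtual:golden:thread:customer:42");
context.issueRequest(req);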