Distributed Golden Threads work in a similar way to regular Golden Threads.
One or more virtual dependencies can be attached to any response generated by an endpoint; later they can be cut by another process or endpoint to invalidate all dependent resources.
This approach is particularly useful when a resource is external to the ROC system, for example in a database or an external service.
Distributed golden threads share and synchronize golden thread state across an arbitrary number of instances in a hub-spoke pattern.
This allows virtual dependencies to be distributed across a cluster of NetKernel instances and thereby provides a powerful tool to manage distributed cache consistency.
Two or more hosts must be arranged in a hub-spoke pattern.
One host is configured as the server while the others are clients of that server. Once the pattern is established, all hosts operate symmetrically: each host is able to attach and cut golden threads, and all other hosts will be synchronized.
Both client and server are prototypes that must be initialised with an NKP configuration resource. This can either be an inline literal or a regular resource.
Any valid NKP configuration can be used for distributed golden threads. The only constraints are that the client NKP configuration must contain:
Both urn:org:netkernel:mod:hds and urn:com:ten60:netkernel:nkp2 must be imported into the space containing the distributed golden thread prototype for it to operate correctly.
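For example, the declaration of the hosting space would include both imports (a minimal sketch; the rest of the space declaration, including the prototype instance itself, is omitted here):

<import>
  <uri>urn:org:netkernel:mod:hds</uri>
</import>
<import>
  <uri>urn:com:ten60:netkernel:nkp2</uri>
</import>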
The distributed golden thread endpoints will coalesce cut requests into packets to manage network overhead. A configurable coalesce period must be specified, which determines the maximum frequency of messages between hosts. This is currently the only configurable property specified by the config parameter:
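As an illustrative sketch only, an inline config literal might look like the following; the element name coalescePeriod and the use of milliseconds are assumptions, not the definitive property name:

<config>
  <!--maximum frequency of cut messages between hosts (element name and units assumed)-->
  <coalescePeriod>250</coalescePeriod>
</config>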
It is possible to attach listeners for cut events on one or more golden threads. The prototype exposes an endpoint which supports active:addDGTCutListener and active:removeDGTCutListener. This follows a pattern and grammar symmetrical with the layer 1 Golden Thread Listener.
Listeners are only notified after all connected clients have had their threads cut by the server. This two-phase approach ensures that any recalculation performed by the listener will work with fresh resource state.
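For illustration, a listener might be registered and later removed as follows. This is a sketch: the argument names (id, listener) and the listener resource res:/myCutListener are assumptions based on symmetry with the layer 1 Golden Thread Listener.

// register a listener for cut events on a golden thread
req=context.createRequest("active:addDGTCutListener");
req.addArgument("id", "virtual:golden:thread:foo");
req.addArgument("listener", "res:/myCutListener");   // hypothetical listener resource
context.issueRequest(req);

// later, detach the same listener
req=context.createRequest("active:removeDGTCutListener");
req.addArgument("id", "virtual:golden:thread:foo");
req.addArgument("listener", "res:/myCutListener");
context.issueRequest(req);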
Both the DGT Client and Server expose runtime state to allow you to observe and monitor the operation of the golden threads. This state can be viewed in the Space Explorer or accessed programmatically.
The DGT Server creates a custom log which records all client connects and disconnects as well as every cut.
How to construct a DGT Server instance running on port 8102 using the prototype provided by mod:dgt:
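The following is a minimal sketch of such a declaration; the prototype name and the config element names are illustrative assumptions and should be checked against the mod:dgt module itself:

<endpoint>
  <!--prototype exported by mod:dgt; mod:dgt must be imported into this space (prototype name assumed)-->
  <prototype>DistributedGoldenThreadServer</prototype>
  <config>
    <!--inline literal NKP server configuration listening on port 8102 (element name assumed)-->
    <port>8102</port>
  </config>
</endpoint>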
How to construct a DGT Client instance connecting to a DGT Server on host "dgt.server" at port 8102 using the prototype provided by mod:dgt:
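Again, a minimal sketch only; the prototype name and config element names are assumptions:

<endpoint>
  <!--prototype exported by mod:dgt (prototype name assumed)-->
  <prototype>DistributedGoldenThreadClient</prototype>
  <config>
    <!--inline literal NKP client configuration pointing at the server (element names assumed)-->
    <remoteHost>dgt.server</remoteHost>
    <remotePort>8102</remotePort>
  </config>
</endpoint>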
Both the client and server present a pair of user accessible endpoints for attaching (active:attachDGT) and cutting (active:cutDGT) golden threads. The semantics of these are identical to the local Golden Thread endpoints.
Example attaching distributed golden threads...
req=context.createRequest("active:attachDGT"); req.addArgument("id", "virtual:golden:thread:foo") req.addArgument("id", "virtual:golden:thread:baa") context.issueRequest(req)
Example cutting distributed golden threads...
req=context.createRequest("active:cutDGT"); req.addArgument("id", "virtual:golden:thread:foo") req.addArgument("id", "virtual:golden:thread:baz") context.issueRequest(req)