
Out of the box, NetKernel Enterprise Edition comes with an extensive set of tools for distributing ROC solutions.

The core infrastructure is the NetKernel Protocol (NKP), which allows the NetKernel ROC abstraction to be transparently distributed over many diverse transport layers, including HTTP.

Basic point-to-point configurations can easily be implemented with NKP client/server endpoints.
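The key property of NKP is that application code does not change: an NKP client endpoint participates in resolution like any other endpoint in the requesting address space and relays resolved requests to the remote NKP server. The following sketch, assuming the standard NKF Java accessor API, shows a request that is issued identically whether it resolves locally or through an NKP client endpoint. The resource identifier res:/data/orders is purely illustrative, and the NKP endpoints themselves are declared in the space configuration, not in code.

import org.netkernel.layer0.nkf.INKFRequestContext;
import org.netkernel.module.standard.endpoint.StandardAccessorImpl;

// Minimal NKF accessor sketch: the request below is resolved by the enclosing
// address space. If that space imports an NKP client endpoint, the request is
// transparently relayed to a remote NetKernel instance; this code is unchanged.
// (res:/data/orders is a hypothetical resource identifier.)
public class OrderReportAccessor extends StandardAccessorImpl {
    public void onSource(INKFRequestContext context) throws Exception {
        // Source the resource - local or remote resolution is decided by the
        // address space, not by this code.
        String orders = context.source("res:/data/orders", String.class);

        // Respond with a derived representation.
        context.createResponseFrom("Order count: " + orders.length());
    }
}

Because the distribution boundary is a property of the address space, the same accessor can be deployed against a local data space or a remote one without modification.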

More powerful and scalable solutions are also available. These include:

Load Balancing

The NKP Load Balancer enables distributed architectures with high availability, scaling and session affinity, using configurable load-balancing algorithms. The NKP Load Balancer works with the existing NKP infrastructure, so it can be introduced as a direct swap for an existing point-to-point NKP client endpoint to scale out any part of an ROC solution.

Please contact 1060 Research for more details.

Distributed Caching

The NetKernel cache is optimized to work closely with the Kernel to cache and eliminate redundant computation in the local NetKernel instance. It is analogous to a CPU's L1 cache.

It is also valuable to share state across a cluster of NetKernel instances using a distributed cache. The NetKernel L2 cache provides shared distributed caching with common Golden Thread consistency, configurable read-write freshness options and pluggable persistence implementations.
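The L2 cache itself is configured declaratively as part of NetKernel Enterprise Edition and is not shown here. Purely as an illustrative sketch, assuming the standard NKF Java API, the endpoint below returns a response with dependency-based expiry; it is this expiry and dependency information that allows any cache tier, local or shared, to keep its entries consistent. The resource identifier res:/data/prices is hypothetical.

import org.netkernel.layer0.nkf.INKFRequestContext;
import org.netkernel.layer0.nkf.INKFResponse;
import org.netkernel.module.standard.endpoint.StandardAccessorImpl;

// Sketch only: the L2 cache is configured declaratively in NKEE (not shown here).
// This endpoint simply returns a response whose expiry is dependent on the
// resources it sourced, so a cache holding it can keep it consistent.
public class PriceListAccessor extends StandardAccessorImpl {
    public void onSource(INKFRequestContext context) throws Exception {
        // Hypothetical upstream resource; it becomes a dependency of our response.
        String prices = context.source("res:/data/prices", String.class);

        INKFResponse response = context.createResponseFrom("PRICES:" + prices);
        // Dependent expiry (the default): the cached representation expires when
        // any of its dependencies expire.
        response.setExpiry(INKFResponse.EXPIRY_DEPENDENT);
    }
}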

Please contact 1060 Research for more details.

Distributed Golden Threads

The standard Golden Thread technology provides fine-grained control of cache consistency within the local NetKernel cache.
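As a concrete illustration of the local mechanism, the following sketch, assuming the standard NKF Java API and the layer1 golden thread services (active:attachGoldenThread and active:cutGoldenThread), attaches a golden thread to a computed response and later cuts it to expire every cached representation that depends on it. The thread identifier gt:products and the resource identifiers are hypothetical.

import org.netkernel.layer0.nkf.INKFRequest;
import org.netkernel.layer0.nkf.INKFRequestContext;
import org.netkernel.module.standard.endpoint.StandardAccessorImpl;

// Sketch of the standard (local) Golden Thread pattern using the layer1 services.
// The thread id "gt:products" and the resource identifiers are hypothetical.
public class ProductCatalogueAccessor extends StandardAccessorImpl {

    public void onSource(INKFRequestContext context) throws Exception {
        // Attach a golden thread; it becomes a dependency of the response built
        // by this request, so the cached representation stays fresh until the
        // thread is cut.
        INKFRequest attach = context.createRequest("active:attachGoldenThread");
        attach.addArgument("id", "gt:products");
        context.issueRequest(attach);

        String products = context.source("res:/data/products", String.class);
        context.createResponseFrom("CATALOGUE:" + products);
    }

    // Typically invoked from a separate update endpoint when the underlying data
    // changes: cutting the thread expires all dependent cached representations.
    private void invalidate(INKFRequestContext context) throws Exception {
        INKFRequest cut = context.createRequest("active:cutGoldenThread");
        cut.addArgument("id", "gt:products");
        context.issueRequest(cut);
    }
}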

When distributing a solution across a cluster it is often very powerful to treat Golden Threads as virtual resources that span the cluster.

A comprehensive set of tools and services for managing Distributed Golden Threads is available.

Please contact 1060 Research for more details.