
Synchronization Architecture

Part of Open Cobalt's special sauce is its collection of object-oriented semantics based on active objects capable of temporal reflection. Each object is aware of, and in direct control of, its behavior in time. Open Cobalt also directly supports replication of computation, allowing computation to be moved close to the point of interaction on demand while maintaining a consistent view of behaviors that can scale to include thousands of nodes. It does this by combining these object semantics with a modified version of David P. Reed's TeaTime peer-based messaging protocol, used as a distributed message transaction system that enables replicated computation (synchronization) across multiple peers. This makes replicating computation as easy as replicating data, and makes synchronization of all events across multiple peers a fundamental property of the system.
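The core idea can be illustrated with a minimal sketch (not Open Cobalt's actual implementation, which is written in Squeak Smalltalk): if every peer applies the same totally ordered, timestamped messages to an identical replica of an object, the replicas converge deterministically without ever shipping state. All names below (`Replica`, `execute`, `digest`) are hypothetical, chosen for illustration.

```python
import hashlib
import json

class Replica:
    """A sketch of replicated computation: each peer holds an identical
    copy of an object and applies the same ordered message stream to it,
    so computation itself is replicated rather than its results."""

    def __init__(self):
        self.state = {"count": 0}

    def execute(self, message):
        # A message is (timestamp, method, args); the router guarantees
        # every replica receives messages in the same timestamp order.
        ts, method, args = message
        if method == "add":
            self.state["count"] += args[0]

    def digest(self):
        # A hash of the state lets peers cheaply verify they still agree.
        data = json.dumps(self.state, sort_keys=True).encode()
        return hashlib.sha256(data).hexdigest()

# Two peers receive the identical ordered message stream...
stream = [(1, "add", [5]), (2, "add", [3])]
a, b = Replica(), Replica()
for m in stream:
    a.execute(m)
    b.execute(m)

# ...and end up bit-identical, with no state transfer between them.
assert a.digest() == b.digest()
```

Because only messages travel over the network, any peer can host a replica near the point of interaction, which is what lets this approach scale to many nodes.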

Owing to these properties, software developers can use Open Cobalt as a way of creating deeply collaborative applications without the effort required to understand how replicated applications work. This reduces the programming overhead required for widespread deployment of deeply capable collaborative virtual spaces. It also makes it possible to deploy and coordinate the activities of virtual worlds on multiple machines without the requirement of maintaining central server resources (other than those needed for specialized data and institutional middleware services).

Open Cobalt's virtual-router-based implementation of its ordered-group messaging protocol includes:

1) A coordinated universal time-base embedded in the communication protocol
2) Replicated, versioned objects that unify replicated computation and the distribution of results
3) Replication strategies that separate the mechanisms of replication from the behavioral semantics of objects
4) Deadline-based scheduling extended with failure and nesting
5) A coordinated, distributed two-phase commit that controls the progress of computations at multiple sites, providing resilience, deterministic results, and adaptation to available resources
6) Use of distributed sets
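The two-phase commit mentioned in point 5 can be sketched as follows. This is a generic illustration of the standard technique, not Open Cobalt's code; the names `Participant` and `two_phase_commit` are hypothetical.

```python
class Participant:
    """One site taking part in a distributed computation."""

    def __init__(self, name, healthy=True):
        self.name = name
        self.healthy = healthy
        self.committed = False

    def prepare(self):
        # Phase 1: vote yes only if this site can complete the work.
        return self.healthy

    def commit(self):
        self.committed = True

    def abort(self):
        self.committed = False

def two_phase_commit(participants):
    """Phase 1 collects votes from every site; phase 2 commits only on
    unanimity, otherwise aborts everywhere. Progress is therefore
    all-or-nothing and deterministic across sites."""
    if all(p.prepare() for p in participants):
        for p in participants:
            p.commit()
        return True
    for p in participants:
        p.abort()
    return False

# All sites healthy: the computation advances at every site.
sites = [Participant("A"), Participant("B"), Participant("C")]
assert two_phase_commit(sites) and all(s.committed for s in sites)

# One failed site: no site commits, so replicas never diverge.
mixed = [Participant("A"), Participant("B", healthy=False)]
assert not two_phase_commit(mixed) and not any(s.committed for s in mixed)
```

In a replicated-computation setting this matters because a site that commits while another aborts would leave the replicas permanently inconsistent; unanimity keeps every copy in lockstep.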