The Dawn Nexus Archives

An archive needs to be precise, comprehensive and authoritative. The distributed databases behind contemporary and future networked applications additionally need fast synchronisation and resilience against severe damage, whether from malicious actors or from accidental component failures.

The Dawn Nexus Archive is a low-latency, Byzantine Fault Tolerant network database synchronisation protocol. It can be used by any database system that needs to be globally available and able to cope with today's very hostile network environment, as well as plain old bad luck, without missing a beat or failing catastrophically.

General Concept

The following image illustrates the verification and replication strategy for synchronising a distributed database in the Dawn Network:

This model is inspired by a design first proposed by Eric Freeman in his master's thesis. The process flows as follows:

  1. The client sends a message to all validators in the network.
  2. Each validator checks what it receives, then sends a signed receipt to all the other validators.
  3. All validators vote on each other's signed receipts; 3/5ths makes a majority.
  4. If the transaction passes this vote, it is appended to the log and inserted into the database.
  5. Every 64 transactions, every validator rebroadcasts the last several hundred transactions, so all nodes can synchronise to the latest changes and verify they have them all correct, not long after they were authorised.
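As a rough illustration, the five steps above can be sketched in Python. Everything here (the `Validator` class, the `reaches_quorum` helper, the constant names) is invented for illustration and is not taken from any actual Dawn Nexus implementation:

```python
# Illustrative sketch of the five-step flow, not the real implementation.
from dataclasses import dataclass, field

QUORUM_NUM, QUORUM_DEN = 3, 5   # 3/5ths of validators makes a majority
DIGEST_INTERVAL = 64            # digest rebroadcast every 64 transactions

@dataclass
class Validator:
    name: str
    log: list = field(default_factory=list)

def reaches_quorum(votes_for: int, total_validators: int) -> bool:
    # Step 3: passes when votes reach at least 3/5 of all validators.
    return votes_for * QUORUM_DEN >= total_validators * QUORUM_NUM

def settle(tx: str, validators: list, receipts: dict) -> bool:
    # Steps 1-2 happened off-stage: every validator received the client's
    # message and exchanged signed receipts; `receipts` maps validator
    # name -> True if that validator acknowledged the transaction.
    votes = sum(1 for acked in receipts.values() if acked)
    if not reaches_quorum(votes, len(validators)):
        return False
    # Step 4: append the transaction to every validator's log.
    for v in validators:
        v.log.append(tx)
    # Step 5: every DIGEST_INTERVAL transactions, a digest rebroadcast
    # would re-synchronise the last few hundred entries (not modelled here).
    return True

validators = [Validator(f"v{i}") for i in range(5)]
ok = settle("tx-1", validators, {v.name: v.name != "v4" for v in validators})
print(ok)  # 4 of 5 receipts clears the 3/5ths quorum -> True
```

Note that the quorum check uses cross-multiplication rather than floating point, so the 3/5ths threshold is exact for any validator count.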

Differences from a Blockchain

Essentially, we are discarding several long-standing tenets of 'Blockchain' canon:

Firstly, there is no binding between token issuance and block production.
Secondly, there are no blocks, just a stream of validated transactions, and databases can synchronise in other ways if desired (much faster than replaying the log).
Lastly, the log is not eternal. Eventually most nodes will prune it; there is, however, a potential business in providing an immutable archive.

Not a cryptocurrency

We make no assumptions about issuing tokens. If a token is simply a type of data, then distributing tokens is the proper business of the application, not of the database replication protocol; that is, applications decide how monetisation occurs.

No part of Nexus is envisioned to involve a ledger; rather, Nexus is the platform upon which a ledger, or any distributed application, can be built. There are plans for a media content monetisation platform that will use a token, but that is outside the scope of this document.

By leaving monetisation to applications built with this framework of tools, we make them easier to apply to smaller-scale installations for commercial and industrial networks as well as to a public open network. The natural flow of funding will then go towards the constant improvement of the platform's stability and its resistance to new attacks and failure modes, much as funding today sustains the IANA, the IEEE and other standards organisations that oversee the standardised protocols allowing competitors' products to be interchangeable.

It is intended that the platform itself will form a framework for distributing corporate/democratic regulation of the software development process, becoming a Distributed Autonomous Standards Organisation, which makes for the groovy-sounding acronym DASO. It will do so by providing the reference platform and refining the protocols.

Performance is the primary target

It is intended that applications deployed on the network can themselves run networks that function similarly to the main network, but with very different performance characteristics.

Consider the simple example of virtual game worlds. For optimum performance, all players in a 'physical' region of a game should connect to the closest possible servers. For first-person shooter games, the database replication rate has to be much higher: in the range of about 50 ms to sync across all nodes serving players who interact together, so that hits and positions register in time.

A cryptocurrency, by contrast, does not require such extreme low latency. Because there is no time schedule or random lottery, a transaction is simply broadcast, bounces back and forth twice through the validators, and then becomes canonical. Every node can be in sync, globally, in under 800 ms: three rounds at an average of around 250 ms each comes to 750 ms, and typical conditions would likely be closer to 500 ms.
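A quick back-of-envelope check of those figures, using the round count and per-round estimate from the text (the constant names are ours):

```python
# Sanity check of the settlement-latency estimate quoted above.
ROUNDS = 3              # broadcast plus two bounces through the validators
AVG_ROUND_MS = 250      # average latency per round, per the text

worst_typical_ms = ROUNDS * AVG_ROUND_MS
print(worst_typical_ms)  # 750 ms, inside the 800 ms upper bound cited
```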

The Forum will also use this protocol, except that it will form arbitrary node clusters as message replicators based on regional proximity (low ping within the cluster). The general baseline for any application is a quorum of 25 nodes, with a minimum of 15 agreeing to settle a transaction. This ensures there are sufficient copies for availability, while narrowing the amount of traffic each node must relay and economising on storage space.
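As a quick sanity check (the constant names are ours, the figures are the document's), the baseline cluster parameters line up exactly with the 3/5ths quorum rule:

```python
# 15 agreeing nodes out of a 25-node cluster is precisely three fifths.
from fractions import Fraction

CLUSTER_SIZE = 25
MIN_AGREEING = 15

ratio = Fraction(MIN_AGREEING, CLUSTER_SIZE)
print(ratio)  # 3/5
```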

Automatic Geographical Sharding

Thus, for any given application, the propagation of certified transactions can be sharded so that, where possible, it happens at local latency, enabling the fastest possible processing. This simply means that some arbitrary group of nodes become the local validators of the database. It gives fairly high confidence that a fraudulent or otherwise mischievous transaction, if not caught in the local network zone, will be caught at the regional or international level.

Any payout for work that turns out to be invalid is likewise purged from the record when the sync digests go through. Based on Steem's traffic rates, this means invalid transactions will be purged roughly every 30 seconds (at 64 transactions per digest), and more frequently the heavier the traffic.
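The implied cadence can be checked with trivial arithmetic; the transactions-per-second figure below is an assumption read back from the document's own 30-second estimate, not a measured Steem rate:

```python
# Digest cadence implied by the text: 64 transactions per digest at a
# Steem-like rate of roughly 2 tx/s gives a purge window near 30 seconds.
TX_PER_DIGEST = 64
tx_per_second = 2.13  # assumed traffic rate, chosen to match the text

window_s = round(TX_PER_DIGEST / tx_per_second)
print(window_s)  # ~30 seconds between digests
```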

As the network grows, metrics will determine the necessary ratio of active users to locally accessible validator nodes, and these slots will be automatically advertised, auctioned and issued to users operating the necessary installations, where this role is monetised. The tokens paid for this licence are burned, that is, deleted from the database, though the licence itself can be transferred.

Thus, by fragmenting the consensus latency: when all the necessary data and correct conditions appear to be in place to allow a transaction locally, it can be sealed locally in under 200 ms, and by the time it reaches the global consensus and comes back in the next replay, approximately 1 second later, it is settled globally. This timing closes the window for an attempt to broadcast a contrary transaction elsewhere.

In the hypothetical case of an attack that (somehow) triggers conflicting transactions within 100 ms of each other on different parts of the fractal trunking system, the likely result within 5 seconds is that nodes drop both transactions, because the conflict causes a 50/50 split. A hypothetical three-pronged attack would fare even worse. Transactions must reach the 3/5ths quorum or they do not pass into the log.
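A minimal sketch of why the split attack fails, assuming votes divide evenly and the 3/5ths threshold stated above (the function and figures here are illustrative):

```python
# Conflicting transactions split the vote; none can clear 3/5ths.
def passes_quorum(votes: int, total: int) -> bool:
    # Cross-multiplied 3/5ths check, as used throughout this document.
    return votes * 5 >= total * 3

TOTAL = 100
two_way = [50, 50]        # two-pronged attack: 50/50 split
three_way = [34, 33, 33]  # three-pronged attack: roughly a third each

print([passes_quorum(v, TOTAL) for v in two_way])    # [False, False]
print([passes_quorum(v, TOTAL) for v in three_way])  # [False, False, False]
```

Both prints show every prong failing the quorum, so all the conflicting transactions are dropped rather than any one becoming canonical.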
