
Send block lifecycle

How does a send block get recorded in the Nano network

A bystander's look at the C++ reference implementation

I've spent some time looking at the current Nano reference implementation. The codebase is huge, so it wasn't an easy task. I wanted to focus on a precise question: what is the lifecycle of a send block? These are my findings.


Since this piece will be about a send block, everything about creating a new chain is out of scope. Let's imagine a user wants to send some raw. My node will create a message with a header similar to this:

network: live
protocol version: 19
message type: publish
block type: send
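These header fields map onto a fixed 8-byte wire layout. A minimal sketch, where the byte order and the exact constants are my assumptions, loosely based on the handshake logs later in this post:

```rust
// Sketch of an 8-byte Nano message header: magic byte 'R', a network byte,
// three protocol version bytes (max / using / min), a message-type byte and
// two extension bytes. The constants are illustrative, not authoritative.
fn encode_header(network: u8, version: u8, message_type: u8) -> [u8; 8] {
    [b'R', network, version, version, version, message_type, 0, 0]
}

fn main() {
    // hypothetical values: b'C' for the live network, 0x03 for "publish"
    let header = encode_header(b'C', 19, 0x03);
    assert_eq!(header[0], 0x52); // the magic_number 0x52 ('R') seen in the logs
    println!("{:02x?}", header);
}
```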

I will pretend a random user with a balance of 700 raw wants to send 10 raw. If we drill into the block information we'll find something like this:

previous: BBE55A35F79F887...
link/destination: 9A2726664A18FE5...
balance: 690
work: 14b3bc748f2c8e93
signature: B421B88AFBEDFC...

The balance is 690 raw because it was 700 and I'm sending 10 raw. The node will then send this message to its peers.
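Note the accounting model this implies: a send block stores the resulting balance, not the amount. Anyone replaying the chain can recover the amount by subtraction. A tiny sketch:

```rust
// A send block records the account's new balance; the amount transferred is
// the previous balance minus the new balance. checked_sub flags a send that
// would increase the balance, which is impossible for a valid send block.
fn amount_sent(previous_balance: u128, new_balance: u128) -> Option<u128> {
    previous_balance.checked_sub(new_balance)
}

fn main() {
    assert_eq!(amount_sent(700, 690), Some(10)); // the example above
    assert_eq!(amount_sent(690, 700), None);     // not a valid send
}
```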

Another node receives the message

For each peer there is an already established TCP connection, and after a message is processed a new message listener is created. This is how the listener is installed in bootstrap_server.cpp:151:

void nano::bootstrap_server::receive ()
{
	// ...
	socket->async_read (receive_buffer, 8, [this_l](boost::system::error_code const & ec, size_t size_a) {
		// ...
		// Receive header
		this_l->receive_header_action (ec, size_a);
	});
}

This puts whatever we receive over the TCP connection into receive_buffer. The function receive_header_action is defined immediately after and reads like this:

void nano::bootstrap_server::receive_header_action (boost::system::error_code const & ec, size_t size_a)
{
	if (!ec)
	{
		// ...
		nano::bufferstream type_stream (receive_buffer->data (), size_a);
		auto error (false);
		nano::message_header header (error, type_stream);
		if (!error)
		{
			auto this_l (shared_from_this ());
			switch (header.type) {...}
		}
	}
	// error management ...
}

What happens above is that the head of receive_buffer is wrapped in type_stream, and type_stream is used to instantiate a message_header. The constructor deserializes the stream and, in particular, fills the header.type attribute. This matters because, provided no error happened, what we do next depends on header.type (the switch construct). Let's see the case for a publish message.
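The gist of that deserialize-then-dispatch step can be sketched like this (the position of the type byte and the numeric values are assumptions for illustration, as are the enum names):

```rust
#[derive(Debug, PartialEq)]
enum MessageType {
    Keepalive,
    Publish,
    Other(u8),
}

// Pull the message type out of a deserialized 8-byte header and dispatch on
// it, mirroring the switch over header.type. Byte offsets are illustrative.
fn parse_type(header: &[u8; 8]) -> MessageType {
    match header[5] {
        0x02 => MessageType::Keepalive,
        0x03 => MessageType::Publish,
        t => MessageType::Other(t),
    }
}

fn main() {
    let header = [0x52, b'C', 19, 19, 19, 0x03, 0, 0];
    assert_eq!(parse_type(&header), MessageType::Publish);
}
```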

case nano::message_type::publish:
{
	socket->async_read (receive_buffer, header.payload_length_bytes (), [this_l, header](boost::system::error_code const & ec, size_t size_a) {
		this_l->receive_publish_action (ec, size_a, header);
	});
	break;
}

It's installing another listener on the same buffer. The handler will call the receive_publish_action function in the same file, which validates the work in the carried block. It then adds the message to the requests deque. This will ultimately be processed by the request_response_visitor, which in turn puts the message into the entries deque of the tcp_message_manager.
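Stripped of the asynchronous machinery, the overall shape is a two-stage read: a fixed-size header first, then a payload whose length is derived from that header. A synchronous sketch (the real node computes the length from the header's type and extension bits via payload_length_bytes; here it's simply passed in):

```rust
use std::io::{Cursor, Read};

// Read an 8-byte header, then read exactly the number of payload bytes the
// header implies. Everything about the byte values here is illustrative.
fn read_message(stream: &mut impl Read, payload_len: usize) -> std::io::Result<([u8; 8], Vec<u8>)> {
    let mut header = [0u8; 8];
    stream.read_exact(&mut header)?;
    let mut payload = vec![0u8; payload_len];
    stream.read_exact(&mut payload)?;
    Ok((header, payload))
}

fn main() -> std::io::Result<()> {
    // a fake wire capture: 8 header bytes followed by a 3-byte payload
    let wire = [0x52, b'C', 19, 19, 19, 0x03, 0, 0, 0xAA, 0xBB, 0xCC];
    let (header, payload) = read_message(&mut Cursor::new(&wire[..]), 3)?;
    assert_eq!(header[5], 0x03);
    assert_eq!(payload, vec![0xAA, 0xBB, 0xCC]);
    Ok(())
}
```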

Processing message entries

At this point the network class enters the stage. When initialized, this class runs the process_messages loop at tcp.cpp:279.

void nano::transport::tcp_channels::process_messages ()
{
	while (!stopped) // while we are not shutting down the node
	{
		auto item (node.network.tcp_message_manager.get_message ());
		if (item.message != nullptr)
		{
			process_message (*item.message, item.endpoint, item.node_id, item.socket, item.type);
		}
	}
}

Internally, process_message makes sure we have a channel open with the message originator. Then it creates a network_message_visitor relative to the channel and processes the publish message according to the following function in network.cpp:

void publish (nano::publish const & message_a) override
{
	// ... logging and monitoring logic ...
	if (!node.block_processor.full ())
	{
		node.process_active (message_a.block);
	}
	// ...
}

where process_active adds the block inside the message to both the block_arrival and the block_processor. The latter is responsible for putting the block into the blocks deque.
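Everything described so far is a chain of producer/consumer queues: one thread pushes items onto a deque, another pops and processes them. A minimal sketch of that pattern with a mutex and a condition variable (generic names, not the node's):

```rust
use std::collections::VecDeque;
use std::sync::{Arc, Condvar, Mutex};
use std::thread;

// Minimal producer/consumer deque, the pattern behind both the
// tcp_message_manager's entries deque and the block processor's blocks deque.
fn run_queue(items: Vec<&'static str>) -> Vec<&'static str> {
    let queue = Arc::new((Mutex::new(VecDeque::new()), Condvar::new()));
    let n = items.len();

    // Consumer thread: pop items as they arrive, sleeping while the deque is empty.
    let consumer = {
        let queue = Arc::clone(&queue);
        thread::spawn(move || {
            let (lock, cvar) = &*queue;
            let mut processed = Vec::new();
            while processed.len() < n {
                let mut q = lock.lock().unwrap();
                while q.is_empty() {
                    q = cvar.wait(q).unwrap(); // woken by the producer's notify
                }
                processed.push(q.pop_front().unwrap());
            }
            processed
        })
    };

    // Producer: push each item and wake the consumer.
    let (lock, cvar) = &*queue;
    for item in items {
        lock.lock().unwrap().push_back(item);
        cvar.notify_one();
    }

    consumer.join().unwrap()
}

fn main() {
    let out = run_queue(vec!["block-a", "block-b", "block-c"]);
    assert_eq!(out, vec!["block-a", "block-b", "block-c"]); // FIFO order preserved
}
```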

Block processing#

Whenever a node class is instantiated it spawns a block processor thread. This thread has an infinite loop in blockprocessor.cpp inside the function process_blocks. This starts a transaction that, after acquiring various locks, processes a batch of blocks. The processing of a single block is defined in the process_one function and relies on a ledger_processor defined in ledger.cpp, at least for the send block we're interested in.

The full logic can be found in ledger.cpp in the send_block function. At its core it's a pyramid of ifs which tries to account for everything that might go wrong: for example, whether the work of the block is sufficient (note that we already checked this when we received the block from another node).

At the top of the pyramid we finally execute the instruction ledger.store.block_put (transaction, hash, block_a);

which physically adds the block to the permanent storage.
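The "pyramid of ifs" can be flattened into a sequence of early-return checks. A sketch of the kinds of checks involved, where the names, the field layout and the exact check order are mine, not the reference implementation's:

```rust
#[derive(Debug, PartialEq)]
enum ProcessResult {
    Progress,         // all checks passed: store the block
    InsufficientWork,
    BadSignature,
    GapPrevious,
    Fork,
}

// Illustrative stand-in for a send block plus the ledger facts about it.
struct SendBlock {
    work_valid: bool,
    signature_valid: bool,
    previous_exists: bool,
    previous_is_head: bool,
}

// Each check gates the next; only a block that passes them all is written
// to permanent storage.
fn process_send(block: &SendBlock) -> ProcessResult {
    if !block.work_valid {
        return ProcessResult::InsufficientWork;
    }
    if !block.signature_valid {
        return ProcessResult::BadSignature;
    }
    if !block.previous_exists {
        return ProcessResult::GapPrevious; // previous block not in the ledger yet
    }
    if !block.previous_is_head {
        return ProcessResult::Fork; // previous already has a different successor
    }
    ProcessResult::Progress
}

fn main() {
    let ok = SendBlock { work_valid: true, signature_valid: true, previous_exists: true, previous_is_head: true };
    assert_eq!(process_send(&ok), ProcessResult::Progress);
    let fork = SendBlock { previous_is_head: false, ..ok };
    assert_eq!(process_send(&fork), ProcessResult::Fork);
}
```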


This is not the end of the block's life, though. That only comes when the block is cemented. Cementing is a separate process that involves consensus, so the block could even be deleted if, for example, it was detected as a double spend. I'll write about this in another article.

How does Nano's peer discovery work?

When a Nano node starts for the first time, it has to work out who to talk to. The set of nodes is generally in flux, so hard-coding IP addresses is not the best idea.

Recently @gurghet added the initial peering code which implements node discovery for the Feeless node, similar to the official Nano node, explained below.

Previously, the Feeless node only accepted a single argument which was another node's IP address. I was using this to connect to the official Nano node running on my PC, by setting it to localhost when working on the Feeless node implementation.

The way Nano node discovery works in the official Nano implementation, and now in Feeless, is via DNS: the node resolves a well-known domain, peering.nano.org. Presumably the domain is owned by the Nano Foundation.

This domain resolves to multiple A records; at the time of writing there were eight of them (the addresses themselves change over time, so they are omitted here).

Each one of these is a Nano node. Looking into these IP addresses, they belong to several different ISPs: Digital Ocean, Hetzner, Linode, CloudSigma, netcup and Choopa. On top of that, they are located all around the world: India, Finland, United States, Switzerland, Netherlands, Japan, Germany and Australia.

I'm guessing these are nodes controlled by the Nano Foundation, or they could just be hand-picked principal representatives, etc.

It looks like a very well distributed set for new nodes to start with. If there's a problem with any of these cloud providers or with a country's Internet, a node can still easily start synchronizing with the other nodes.

A neat thing about this setup is that the Nano Foundation can update the initial nodes to their liking without having to create a new node release. They just need to update the DNS records.
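The bootstrapping step itself is just a hostname resolution that yields several candidate peers at once. A sketch using the standard library's resolver; the domain and port 7075 are from the article, but the sketch resolves "localhost" so it works offline:

```rust
use std::net::{SocketAddr, ToSocketAddrs};

// Resolve one well-known hostname into a list of candidate peer addresses.
// A real node would pass the peering domain here instead of "localhost".
fn resolve_peers(host: &str, port: u16) -> std::io::Result<Vec<SocketAddr>> {
    Ok((host, port).to_socket_addrs()?.collect())
}

fn main() -> std::io::Result<()> {
    let peers = resolve_peers("localhost", 7075)?;
    assert!(!peers.is_empty());
    println!("{} candidate peer(s)", peers.len());
    Ok(())
}
```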

Once a node is connected to a peer and an exchange of handshakes happens, the peer sends more peers to that node via the Keepalive message, seen below:

feeless node -o
Mar 23 09:48:43.999 INFO feeless::node: Spawning a channel to
Mar 23 09:48:44.029 DEBUG send_handshake:send: feeless::node::controller: OBJ Header { magic_number: 0x52, network: Live, version_max: V18, version_using: V18, version_min: V18, message_type: Handshake, ext: [Query] }
Mar 23 09:48:44.029 DEBUG send_handshake:send: feeless::node::controller: OBJ HandshakeQuery(Cookie(C3FB9659AAF90E371A1B356B47F8C00A1D50276BC08B6EFD0F75F5C9ABCBA869))
Mar 23 09:48:44.088 DEBUG feeless::node::controller: Handshake { query: Some(HandshakeQuery(Cookie(4FD7DFF75EF17B38490E3526D5D05AD625BDD4324E2AEFC62C7B05A56484BC7F))), response: Some(HandshakeResponse { public: Public(46997A6BB19A196BCC7852DD0DC5CFDD0A8FBBACCD7800607288E0E82BC591F7 nano_1jnshbou58isfh89inpx3q4wzqacjyxtsmdr13i97491x1owd6hqy7wdttdx), signature: B8DD2E74EBF201C1996CF98E20A234244D63B44C5D1E34BE483F4F68A8ECFE228E9C81F063FEDCD21922EC8E7C748BD4F6C47E1F18F8F1DE9E107CEA4411B40D }) }
Mar 23 09:48:44.090 DEBUG send: feeless::node::controller: OBJ Header { magic_number: 0x52, network: Live, version_max: V18, version_using: V18, version_min: V18, message_type: Handshake, ext: [Response] }
Mar 23 09:48:44.090 DEBUG send: feeless::node::controller: OBJ HandshakeResponse { public: Public(CBE85E50353C700AE846E7F945B96021C73F9F2A0BA16C64E8DF0CE6FC47BEB7 nano_3kzadsa5ch5i3dn6fszsapwp1ag99yhkn4x3fjkgjqrewuy6hhoqyu8ds1sf), signature: 31DF62EB8047D7BDD36446E7510F4CFCDB53057825A99E88818052B1876BE17BA89F692A8A3399A5A6B9E6191FF0C54A4B84148821B79362E0633DBEA0940F05 }
Mar 23 09:48:44.185 DEBUG feeless::node::controller: ConfirmReqByHash([RootHashPair { hash: BlockHash(0C0DA8F4F267366B2AC1F4951F662651DB3A5BDC9AB6958BB578D42069011E4C), root: BlockHash(9A89B4D1E74DE9C6396EC5CB10137A769E3B1DF80235C65E00D4A8ED0FD39BF0) }])
Mar 23 09:48:44.256 DEBUG feeless::node::controller: Keepalive([Peer([::]:7075), Peer([::ffff:]:7075), Peer([::ffff:]:7075), Peer([::ffff:]:7075), Peer([::ffff:]:7075), Peer([::ffff:]:7075), Peer([::ffff:]:7075)])

The message sends up to eight peers back to the node; from there it can keep discovering more peers, and so on.
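Building such a reply is simple to sketch. The cap of eight is from the Keepalive message above; the naive first-N selection policy here is my own simplification, not necessarily what real nodes do:

```rust
use std::net::SocketAddr;

// Build a Keepalive peer list: at most eight peers from the ones we know.
fn keepalive_list(known: &[SocketAddr]) -> Vec<SocketAddr> {
    known.iter().copied().take(8).collect()
}

fn main() {
    // twenty fake local peers for illustration
    let known: Vec<SocketAddr> = (0..20)
        .map(|i| format!("127.0.0.{}:7075", i + 1).parse().unwrap())
        .collect();
    let sample = keepalive_list(&known);
    assert_eq!(sample.len(), 8); // capped at eight
}
```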


Welcome to the Feeless documentation and blog. I'm not much of a blogger but will try to keep this section up to date with news and interesting discoveries.

What is this?

Feeless is an implementation of Nano written in Rust.

I am a huge fan of Nano, and decided to start this project as a way to learn how Nano works internally.

At the time of writing, Feeless can be used as a fully working Rust crate with fairly complete crate documentation (although still progressing).

Feeless can also be used via the command line to manage a wallet and convert between keys, e.g. private key to public key to address. It can also convert between units, e.g. nano to micronano.
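Those unit conversions are integer scaling on raw amounts: 1 nano = 10^30 raw and 1 micronano = 10^24 raw, so 1 nano = 10^6 micronano. A sketch (the function name is mine, not Feeless's API):

```rust
// Nano balances are integers in "raw". 1 nano = 10^30 raw and
// 1 micronano = 10^24 raw, so nano-to-micronano is a multiplication
// by 10^6. u128 comfortably holds these magnitudes.
const RAW_PER_NANO: u128 = 10u128.pow(30);
const RAW_PER_MICRONANO: u128 = 10u128.pow(24);

fn nano_to_micronano(nano: u128) -> u128 {
    nano * (RAW_PER_NANO / RAW_PER_MICRONANO)
}

fn main() {
    assert_eq!(RAW_PER_NANO / RAW_PER_MICRONANO, 1_000_000);
    assert_eq!(nano_to_micronano(1), 1_000_000);
    assert_eq!(nano_to_micronano(0), 0);
}
```

Keeping everything in integer raw avoids the rounding problems a floating-point representation would introduce for money amounts.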

The Plan#

Basically the plan is to keep working on the project, making sure it is continuously polished and accurate. The eventual goal is to have a fully functional node that can be used as an alternative to the official C++ implementation.