
Phase 0 Networking Specifications #763

Merged: 11 commits, Mar 28, 2019
46 changes: 46 additions & 0 deletions specs/networking/messaging.md
ETH 2.0 Networking Spec - Messaging
===

# Abstract

This specification describes how individual Ethereum 2.0 messages are represented on the wire.

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC 2119.

# Motivation

This specification seeks to define a messaging protocol that is flexible enough to be changed easily as the ETH 2.0 specification evolves.

Note that while `libp2p` is the chosen networking stack for Ethereum 2.0, as of this writing some clients do not have workable `libp2p` implementations. To allow those clients to communicate, we define a message envelope that includes the body's compression, encoding, and body length. Once `libp2p` is available across all implementations, this message envelope will be removed because `libp2p` will negotiate the values defined in the envelope upfront.

# Specification

## Message Structure

An ETH 2.0 message consists of an envelope that defines the message's compression, encoding, and length followed by the body itself.

Visually, a message looks like this:

```
+--------------------------+
|    compression nibble    |
+--------------------------+
|      encoding nibble     |
+--------------------------+
|   body length (uint64)   |
+--------------------------+
|                          |
|           body           |
|                          |
+--------------------------+
```

Contributor: If other comments are accepted, this enveloping can go away.

Clients MUST ignore messages with malformed bodies. The compression/encoding nibbles MUST be one of the following values:

## Compression Nibble Values

- `0x0`: no compression
Contributor: Permission to add 0x1 for snappy compression please?

Contributor Author: See above - I don't want to commit teams to an implementation without getting consensus.


## Encoding Nibble Values

- `0x1`: SSZ
Contributor: Permission to add 0x2 for BSON please?

Contributor (@jannikluhn, Mar 23, 2019): I believe it would be best if we could agree on a single encoding format to maximize compatibility and minimize implementation overhead.

Contributor Author: Would prefer not, for the same reasons articulated on our call - we should agree together on a single encoding scheme.
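
To make the framing concrete, here is a minimal sketch in Python of packing and unpacking the envelope defined above. The spec does not state how the two nibbles are laid out or the endianness of the length field, so the packing below (both nibbles in one byte, little-endian `uint64` length) and the helper names are assumptions.

```python
import struct

COMPRESSION_NONE = 0x0  # compression nibble value above
ENCODING_SSZ = 0x1      # encoding nibble value above

def encode_message(body: bytes,
                   compression: int = COMPRESSION_NONE,
                   encoding: int = ENCODING_SSZ) -> bytes:
    # Assumed layout: compression nibble in the high 4 bits, encoding nibble
    # in the low 4 bits, followed by a little-endian uint64 body length.
    header = bytes([((compression & 0x0F) << 4) | (encoding & 0x0F)])
    return header + struct.pack("<Q", len(body)) + body

def decode_message(data: bytes):
    compression, encoding = data[0] >> 4, data[0] & 0x0F
    (length,) = struct.unpack_from("<Q", data, 1)
    body = data[9:9 + length]
    if len(body) != length:
        raise ValueError("malformed body")  # clients MUST ignore such messages
    return compression, encoding, body

# Round-trip an already-SSZ-encoded payload.
wire = encode_message(b"\x01\x02\x03")
assert decode_message(wire) == (COMPRESSION_NONE, ENCODING_SSZ, b"\x01\x02\x03")
```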

32 changes: 32 additions & 0 deletions specs/networking/node-identification.md
ETH 2.0 Networking Spec - Node Identification
===

Contributor: It seems like we don't need to specify anything here as everything's already either part of the referenced EIP or multiaddr.

Contributor Author: Cool, will remove.

Contributor: Would it be appropriate to file an EIP to allocate a key for multiaddrs in the pre-defined key/value table in the ENR standard?

Contributor: cc: @fjl

# Abstract

This specification describes how Ethereum 2.0 nodes identify and address each other on the network.

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC 2119.

# Specification

Clients use Ethereum Node Records (as described in [EIP-778](http://eips.ethereum.org/EIPS/eip-778)) to discover one another. Each ENR includes, among other things, the following keys:

- The node's IP.
- The node's TCP port.
- The node's public key.

For clients to be addressable, their ENR responses MUST contain all of the above keys. Clients MUST verify the signature of any received ENRs, and disconnect from peers whose ENR signatures are invalid. Each node's public key MUST be unique.

The keys above are enough to construct a [multiaddr](https://github.com/multiformats/multiaddr) for use with the rest of the `libp2p` stack.
Member: One other consideration maybe: ENR (and Discovery v5) is being designed to support multiple types of identity. It is not going to be a hard requirement that secp256k1 EC pubkeys will identify the node. ENRs will describe the identity type.

Contributor: libp2p peer IDs are derived from the public key protobuf, which is just key type + bytes. Here's the spec: libp2p/specs#100. Both SECIO and TLS 1.3 validate peer IDs against the pubkey, so following the spec is important or connections will fail.

Contributor (@arnetheduck, Mar 18, 2019): As I mention in https://github.com/libp2p/specs/pull/100/files#r266291995 - protobuf is not deterministic, and thus not great for feeding into a hashing function or using to determine an ID, unless you used a modified protobuf version that's locked down.

Contributor Author: Wouldn't this be handled at the libp2p layer? Here we're describing how to construct a multiaddr from an ENR; the actual handling of the multiaddr itself and the underlying hash construction would be the responsibility of libp2p.

Contributor: It would, but libp2p itself looks broken in this case - we need to keep an eye on that upstream issue so that we don't spread the breakage further.

Does using ENR require decoding RLP in this context?

It is RECOMMENDED that clients set their TCP port to the default of `9000`.
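
As a rough illustration of how these ENR keys map onto an address for the `libp2p` stack, here is a minimal sketch (IPv4 is assumed and the helper name is hypothetical):

```python
# Minimal sketch: turn the ENR's IP and TCP-port keys into a transport
# multiaddr string. Assumes an IPv4 address; the helper name is hypothetical.
def multiaddr_from_enr(ip: str, tcp_port: int = 9000) -> str:
    return f"/ip4/{ip}/tcp/{tcp_port}"

assert multiaddr_from_enr("192.0.2.1") == "/ip4/192.0.2.1/tcp/9000"
```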

## Peer ID Generation

The `libp2p` networking stack identifies peers via a "peer ID." Simply put, a node's Peer ID is the SHA2-256 `multihash` of the node's public key struct (serialized in protobuf, refer to the [Peer ID spec](https://github.com/libp2p/specs/pull/100)). `go-libp2p-crypto` contains the canonical implementation of how to hash `secp256k1` keys for use as a peer ID.
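
A minimal sketch of the hashing step described above, with the protobuf-serialized public key struct stubbed out as raw bytes. The `0x12`/`0x20` prefix is the standard multihash header for a 32-byte SHA2-256 digest; the base58 text form usually shown by tooling is omitted.

```python
import hashlib

def peer_id_multihash(serialized_pubkey: bytes) -> bytes:
    # serialized_pubkey stands in for the protobuf-encoded public key struct.
    digest = hashlib.sha256(serialized_pubkey).digest()
    # multihash = <hash code 0x12 (sha2-256)> <digest length 0x20> <digest>
    return bytes([0x12, 0x20]) + digest

assert len(peer_id_multihash(b"example-key-bytes")) == 34
```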

# See Also

- [multiaddr](https://github.com/multiformats/multiaddr)
- [multihash](https://multiformats.io/multihash/)
- [go-libp2p-crypto](https://github.com/libp2p/go-libp2p-crypto)
292 changes: 292 additions & 0 deletions specs/networking/rpc-interface.md
ETH 2.0 Networking Spec - RPC Interface
===

# Abstract

The Ethereum 2.0 networking stack uses two modes of communication: a broadcast protocol that gossips information to interested parties via GossipSub, and an RPC protocol that retrieves information from specific clients. This specification defines the RPC protocol.

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC 2119.

# Dependencies

This specification assumes familiarity with the [Messaging](./messaging.md), [Node Identification](./node-identification.md), and [Beacon Chain](../core/0_beacon-chain.md) specifications.

# Specification

## Message Schemas

Message body schemas are notated like this:

```
(
field_name_1: type
field_name_2: type
)
```

Embedded types are serialized as SSZ Containers unless otherwise noted.

All referenced data structures can be found in the [0-beacon-chain](https://github.com/ethereum/eth2.0-specs/blob/dev/specs/core/0_beacon-chain.md#data-structures) specification.

## `libp2p` Protocol Names

A "Protocol ID" in `libp2p` parlance refers to a human-readable identifier `libp2p` uses in order to identify sub-protocols and stream messages of different types over the same connection. Peers exchange supported protocol IDs via the `Identify` protocol upon connection. When opening a new stream, peers pin a particular protocol ID to it, and the stream remains contextualised thereafter. Since messages are sent inside a stream, they do not need to bear the protocol ID.

## RPC-Over-`libp2p`

To facilitate RPC-over-`libp2p`, a single protocol name is used: `/eth/serenity/beacon/rpc/1`. The version number in the protocol name is neither backwards nor forwards compatible, and will be incremented whenever changes to the below structures are required.

Remote method calls are wrapped in a "request" structure:

```
(
id: uint64
method_id: uint16
body: Request
)
```

and their corresponding responses are wrapped in a "response" structure:

```
(
id: uint64
response_code: uint16
result: bytes
)
```

Contributor: Can we just use one structure? You can tell if it's a response by comparing request IDs.

If an error occurs, a variant of the response structure is returned:

```
(
id: uint64
response_code: uint16
result: bytes
)
```

Contributor: At least with SSZ it's not easily possible to distinguish between normal and error responses, as one needs to know the schema before being able to decode the message. What one could do is have a general response format and then an embedded result/error blob that can be decoded in a second step. E.g.:

```
Response: (
    id: uint64
    status_code: uint64
    data: bytes
)

SuccessData: (
    ...
)

ErrorData: (
    ...
)
```

Not really elegant, but I don't really see a better solution (for SSZ, that is).

Contributor Author: Ah, this is a good point. SSZ doesn't support null values either - let me think on this one for a little bit and come up with a solution.

Contributor Author (@mslipper, Mar 18, 2019): Added an `is_error` boolean field. Note that with SSZ at least you can read the `is_error` field prior to the contents of the result via offsets. This allows clients to switch the deserialized type based on the `is_error` value.

Contributor: The alternative would be to use a list - empty if there's no error, and one item if there is.

Contributor: Just to be clear - when encoding or decoding SSZ, there generally exists no provision for skipping fields - even if `is_error` is false, `data` must contain bytes. Embedding a `StatusData` in the `data` field seems to go against the spirit of SSZ generally, as SSZ decoders in general expect to know the exact type of each field, and thus it would not fit "naturally" in "normal" SSZ code.

That said, this issue stems from using SSZ in a wire protocol setting for which it is not... great.

The details of the RPC-Over-`libp2p` protocol are similar to [JSON-RPC 2.0](https://www.jsonrpc.org/specification). Specifically:

1. The `id` member is REQUIRED.
2. The `id` member in the response MUST be the same as the value of the `id` in the request.
3. The `id` member MUST be unique within the context of a single connection. Monotonically increasing `id`s are RECOMMENDED.
4. The `method_id` member is REQUIRED.
5. The `result` member is REQUIRED on success.
6. The `result` member is OPTIONAL on errors, and MAY contain additional information about the error.
7. `response_code` MUST be `0` on success.

Structuring RPC requests in this manner allows multiple calls and responses to be multiplexed over the same stream without switching. Note that this implies that responses MAY arrive in a different order than requests.

The "method ID" fields in the below messages refer to the `method` field in the request structure above.

The first 1,000 values in `response_code` are reserved for system use. The following response codes are predefined:

1. `0`: No error.
2. `10`: Parse error.
3. `20`: Invalid request.
4. `30`: Method not found.
5. `40`: Server error.
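
To illustrate the `id`-based multiplexing and the response codes described above, here is a minimal sketch. The wrapper fields mirror the request/response structures; serialization is left abstract, and the class and helper names are hypothetical.

```python
import itertools

SUCCESS, PARSE_ERROR, INVALID_REQUEST, METHOD_NOT_FOUND, SERVER_ERROR = 0, 10, 20, 30, 40

class RpcSession:
    """Tracks outstanding requests so out-of-order responses can be matched."""

    def __init__(self):
        self._ids = itertools.count()  # monotonically increasing ids (RECOMMENDED)
        self._pending = {}             # id -> method_id

    def make_request(self, method_id: int, body: bytes) -> dict:
        request_id = next(self._ids)
        self._pending[request_id] = method_id
        return {"id": request_id, "method_id": method_id, "body": body}

    def handle_response(self, response: dict) -> bytes:
        method_id = self._pending.pop(response["id"])  # responses MAY arrive out of order
        if response["response_code"] != SUCCESS:
            raise RuntimeError(f"method {method_id} failed with code {response['response_code']}")
        return response["result"]

session = RpcSession()
req = session.make_request(method_id=0, body=b"<ssz-encoded hello>")
# ...later, a response bearing the same id arrives on the stream:
resp = {"id": req["id"], "response_code": SUCCESS, "result": b"<ssz-encoded hello>"}
assert session.handle_response(resp) == b"<ssz-encoded hello>"
```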

### Alternative for Non-`libp2p` Clients

Some clients are waiting for `libp2p` implementations in their respective languages. As such, they MAY listen for raw TCP messages on port `9000`. To distinguish RPC messages from other messages on that port, a byte prefix of `ETH` (`0x455448`) MUST be prepended to all messages. This option will be removed once `libp2p` is ready in all supported languages.
Contributor: I think we should allow using a separate port. It's entirely possible a client will allow both communication modes.

Contributor: Also, this paragraph should go to the top since it relates to the envelope and port 9000.
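
A minimal sketch of the raw-TCP fallback framing described above (the envelope bytes come from the messaging spec; the helper names are hypothetical):

```python
ETH_PREFIX = b"ETH"  # 0x455448

def frame_raw_tcp(envelope: bytes) -> bytes:
    # Prepend the ETH prefix so RPC messages can be told apart on port 9000.
    return ETH_PREFIX + envelope

def unframe_raw_tcp(data: bytes) -> bytes:
    if not data.startswith(ETH_PREFIX):
        raise ValueError("not an ETH 2.0 RPC message")
    return data[len(ETH_PREFIX):]

assert unframe_raw_tcp(frame_raw_tcp(b"\x01envelope")) == b"\x01envelope"
```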


## Messages

### Hello

**Method ID:** `0`

**Body**:

```
(
network_id: uint8
chain_id: uint64
latest_finalized_root: bytes32
latest_finalized_epoch: uint64
best_root: bytes32
best_slot: uint64
)
```

Contributor (on `best_root`): Taking into account the backwards sync suggested elsewhere, and that we can use attestations as a (strong) heuristic that a block is valid and useful, it seems prudent to include (some) attestations here - instead of simply supplying some data like `best_root` that cannot be trusted anyway, a recent attestation would help the connecting client both with head / fork selection and to know with a higher degree of certainty that the root sent "makes sense" and should be downloaded.

The details of this are TBD - but probably we're looking at something like `attestations: [Attestation]` where it's up to the client to choose a representative and recent set (or none, which is also fine, because then one can listen to broadcasts).

Contributor (on `best_slot`): Slots are based on wall time - what's the `best_slot` field for?

Contributor: Pretty sure this is supposed to refer to the slot of the head block. Maybe rename `best_root` and `best_slot` to `head_root` and `head_slot` (or, to be even more clear, `head_block_root`/`head_slot`)?

Contributor: I think `head_block_root` and `head_slot` would be clearer.

Clients exchange `hello` messages upon connection, forming a two-phase handshake. The first message the initiating client sends MUST be the `hello` message. In response, the receiving client MUST respond with its own `hello` message.

Clients SHOULD immediately disconnect from one another following the handshake above under the following conditions:

1. If `network_id` belongs to a different chain, since the client definitionally cannot sync with this peer.
   Contributor: You should consider spelling out network ID and chain ID as separate fields. Chain ID should be set to a fixed number "1" for ETH, and if others want to run their own chain they can change that ID.

   Member: NetworkId vs ChainId +1.
   Also, message body compression algorithm indicator.
   Also, upgrade paths for SSZ (I get the feeling this might change on the wire)... maybe a sorted list of serialization method preferences, the highest mutual being selected?

   Contributor: Still not convinced that we actually need a network id at all and not only a chain id. Especially for RPC, as arguably this isn't even a network, just a set of bidirectional connections (as opposed to the gossip layer where we actually relay data).

2. If the `latest_finalized_root` shared by the peer is not in the client's chain at the expected epoch. For example, if Peer 1 in the diagram below has `(root, epoch)` of `(A, 5)` and Peer 2 has `(B, 3)`, Peer 1 would disconnect because it knows that `B` is not the root in their chain at epoch 3:
   Contributor: Maybe clarify that this is (because it can only be) checked by the peer with the higher latest finalized epoch. I tried to come up with a one-sentence fix, but it's probably better to rewrite the whole paragraph from the point of view of one node shaking hands with another node (right now it's talking about both at the same time).

   Contributor Author: Cool, got it, will do.


```
                  Root A

                  +---+
                  |xxx|  +----+ Epoch 5
                  +-+-+
                    ^
                    |
                  +-+-+
                  |   |  +----+ Epoch 4
                  +-+-+
Root B              ^
                    |
+---+             +-+-+
|xxx+<---+------->+   |  +----+ Epoch 3
+---+    |        +---+
         |
       +-+-+
       |   |  +-----------+ Epoch 2
       +-+-+
         ^
         |
       +-+-+
       |   |  +-----------+ Epoch 1
       +---+
```

Once the handshake completes, the client with the higher `latest_finalized_epoch` or `best_slot` (if the clients have equal `latest_finalized_epoch`s) SHOULD request beacon block roots from its counterparty via `beacon_block_roots` (i.e., RPC method `10`).
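
A minimal sketch of that decision, assuming a client compares its own `hello` against the peer's (the dataclass and helper name are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class Hello:
    network_id: int
    chain_id: int
    latest_finalized_root: bytes
    latest_finalized_epoch: int
    best_root: bytes
    best_slot: int

def should_request_block_roots(local: Hello, remote: Hello) -> bool:
    # The client with the higher latest_finalized_epoch (or, on a tie, the
    # higher best_slot) requests beacon block roots via RPC method 10.
    if local.latest_finalized_epoch != remote.latest_finalized_epoch:
        return local.latest_finalized_epoch > remote.latest_finalized_epoch
    return local.best_slot > remote.best_slot
```
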
Contributor: How would this be handled if the clients both have equal `latest_finalized_epoch` and `best_slot`?

Contributor: Then you discard the client, because you know it's either at genesis or providing invalid data - finalization happens several epochs behind `best_slot` in the happy case.

Contributor: @arnetheduck ah no, I was referring to if both clients have the same `best_slot` and `finalized_epoch` together. So FinalizedEpoch A == FinalizedEpoch B and BestSlot A == BestSlot B.

Contributor: Ah, right. I find that the situation is somewhat analogous to receiving a block whose parent is unknown - you have to make a very similar decision there - the information in this hello, just like in the parentless block, is essentially useless from a trust perspective, and you need to turn to other sources.

@djrtwo suggested that attestations might be a good heuristic, as signing one carries risk for the validator that does so. The information here can, from what I can see, be used to quickly disconnect from a client, if they're saying the network is different. The rest is advisory, and you're hoping for the best.


### Goodbye

**Method ID:** `1`

**Body:**

```
(
reason: uint64
)
```

Clients MAY send `goodbye` messages upon disconnection. The `reason` field MAY be one of the following values:

- `1`: Client shut down.
- `2`: Irrelevant network.
- `3`: Fault/error.

Clients MAY define custom goodbye reasons as long as the value is larger than `1000`.

### Get Status

**Method ID:** `2`

**Request Body:**

```
(
sha: bytes32
user_agent: bytes
timestamp: uint64
)
```

Contributor (on `sha`): I didn't ask when this got drafted, sorry. What is `sha` here?

Contributor Author: The commit hash of the node.

Contributor: What is that? Best root?

**Response Body:**

```
(
sha: bytes32
user_agent: bytes
timestamp: uint64
)
```

Returns metadata about the remote node.

### Request Beacon Block Roots

**Method ID:** `10`

**Request Body**

```
(
start_slot: uint64
count: uint64
)
```

**Response Body:**

```
# BlockRootSlot
(
block_root: bytes32
slot: uint64
)

(
roots: []BlockRootSlot
)
```

Requests a list of block roots and slots from the peer. The `count` parameter MUST be less than or equal to `32768`. The slots MUST be returned in ascending slot order.
Contributor: I have some questions regarding the response:

- Can you skip a slot (e.g., 1, 3, 4)?
- Can you return less/more than the requested roots?
- Can you start at a higher slot?
- Can you return a slot outside of the `start_slot + count` bounds?

Maybe "The slots MUST be returned in ascending slot order." is succinct already? If this is the case we could add something like "The only requirements for roots are ...".

P.S. this is my first comment, thanks for making this doc!


### Beacon Block Headers

**Method ID:** `11`

**Request Body**

```
(
start_root: HashTreeRoot
start_slot: uint64
max_headers: uint64
skip_slots: uint64
)
```

Contributor: Do we need the root? It seems redundant to me, except for the case of chain reorgs, which shouldn't happen frequently at sync (and even then, it's probably better to get blocks from the current chain that we'll be able to use later, instead of getting outdated ones).

Contributor: We need a mechanism for recovering blocks, in case something is lost or the client goes offline for a short bit and loses a few (computer went to sleep / ISP went down for 10 minutes).

I argue in the original issue (#692 (comment)) that it's often natural to request blocks backwards for this reason: the data structure we're syncing is a singly linked list pointing backwards in time, and we receive attestations and blocks that let us discover heads "naturally" by listening to the broadcasts. With a block_root+previous_n_blocks kind of request we can both sync and recover, and for example use attestations to discover "viable" heads to work on, from a sync or recovery perspective. Indeed, negotiating finalized epochs in the handshake is somewhat redundant in that case, albeit a nice optimization (except for the chain id) - we could equally well request blocks from the peer that gossiped us the block or attestation whose parent we're missing - they should not be gossiping attestations they have not linked to a finalized epoch of value.

Contributor: Interesting! To summarize my understanding of your comment: syncing forwards is safer as we can verify each block immediately when we receive it, but syncing backwards is more efficient/doesn't require additional database indexing (and I guess syncing forwards may require a negotiating phase to discover the best shared block). You're proposing to interpret the fact that I see lots of attestations on top of my sync peer's head flying around the network as evidence that their head is valid? And therefore, I'd be pretty safe syncing backwards?

That sounds reasonable. My original concern was that this requires me to know (at least some good fraction of) the validator set, as otherwise my sync peer could create lots of fraudulent attestations for free that I have no chance of verifying. But I would notice this if I have at least one single honest peer (if I try to sync from them or compare the attestations coming from them).

Do you think having only a backwards sync is fine, or do we need both (e.g. for highly adversarial environments, or resource-constrained devices that don't participate in gossiping)?

Contributor (@arnetheduck, Mar 18, 2019):

> more efficient

In terms of network / bandwidth, I'd say it's about the same, but there are some nuances:

- In forward sync, I can ask for "more" slots than already exist, potentially saving round trips - a client could use this to request "all" data at the time of request arrival. Consider the following race: A sends a request, a new block is produced, B receives the request (similar: B starts sending a response which takes time, and a new block is produced).
- In backward sync, one could have a (latest-head, known_slot_number) request ("give me the block you consider to be the head, and history back to slot N") to alleviate this race, but then the server selects the head.
- Both above races are generally solved by collecting data from broadcasts while syncing (classic subscribe-then-sync pattern) - they are mainly concerns if you don't want to subscribe or want to delay subscribing.
- In forward sync, I might end up on a different branch / head than I thought I would - the request itself does not point one out.

In terms of client implementations, I think of backward sync as biased to make it cheaper for the server: the server already has the data necessary - also because the head is kept hot - while the client has to keep a chain of "unknown" blocks around / can't validate eagerly. An additional rule that the response must be forward-ordered could help the client apply / validate the blocks eagerly.

The backwards sync can be seen as more passive/reactive/lazy, while forward sync is more active.

> attestations on top of my sync peer's head flying around the network as evidence that their head is valid

Right. The assumption rests on several premises (thanks @djrtwo!):

- Honest clients will not propagate invalid data (not signed by a validator they know).
- There's a slashing condition on creating unviable attestations - there's currently no penalty to create & sign an unviable block, so one can perhaps imagine a malicious group of validators creating lots of these and for example spamming everyone during important periods "for free". It sounds a bit far-fetched though, tbh, to be creating blocks this way - would love to hear thoughts.
- I've weak-subjectively selected an initial state that contains some validators. I'd primarily look for anything signed by those validators as another heuristic for where to start syncing (even if the validator set might have changed from there).

> Do you think having only a backwards sync is fine or do we need both (e.g. for highly adversarial environments, or resource constrained devices that don't participate in gossiping?)

I'm not sure :) I'm curious to hear feedback on this point, but here are some thoughts:

- It's important that we have a request like Hello to ask for what clients consider to be the head for non-gossiping use cases - but I think that's orthogonal to the sync direction.
- Clients should be free to send that request at any time, not just during the initial negotiation phase.
- Direction can be different for request and response - if different, it requires a slightly "smarter" server.
- There's a cost for each direction, in terms of implementation. I'd start with one and look for strong motivations before implementing the other, as the returns are not that great. Either direction is sufficient, really.

Contributor (@jrhea, Mar 27, 2019):

> Do you think having only a backwards sync is fine or do we need both (e.g. for highly adversarial environments, or resource constrained devices that don't participate in gossiping?)

It seems reasonable to sync backwards from the latest received gossiped block (at least as an initial implementation).

> in backward sync, one could have an (latest-head, known_slot_number) request ("give me the block you consider to be the head, and history back to slot N") to alleviate this race, but then the server selects the head.

Do we really need `start_slot`? If we give clients the option to request a block by either `start_slot` or `start_root`, that forces us to maintain a lookup or search mechanism for both. If we are saying that both fields (`start_slot` and `start_root`) are required to sync, then I would disagree. We should be able to simply perform a lookup by block_root and walk the chain backwards until we reach `max_headers`.

Contributor:

> latest received gossiped block

Or even better, latest gossiped attestation.

> Do we really need start_slot?

I would say that if we go with backwards sync, we should not implement forwards sync here or elsewhere unless there's a strong case for that direction. Having to implement both directions negates some of the benefits of backward sync and adds implementation surface.

It is quite possible to add forward sync in a later version of the protocol as well, should it prove necessary.

Contributor:

> or even better, latest gossiped attestation

I can dig that.

@arnetheduck or anyone else: why do we need `start_slot`?

**Response Body:**

```
(
headers: []BeaconBlockHeader
)
```

Requests beacon block headers from the peer starting from `(start_root, start_slot)`. The response MUST contain no more than `max_headers` headers. `skip_slots` defines the maximum number of slots to skip between blocks. For example, requesting blocks starting at slot `2` with a `skip_slots` value of `1` would return the blocks at slots `[2, 4, 6, 8, 10]`. Where a slot is empty, the closest previous block MUST be returned. For example, if slot `4` were empty in the previous example, the returned array would contain `[2, 3, 6, 8, 10]`. If slot `3` were also empty, the array would contain `[2, 6, 8, 10]` - i.e., duplicate blocks MUST be collapsed. A `skip_slots` value of `0` returns all blocks.

The `skip_slots` parameter helps facilitate light client sync - for example, in [#459](https://github.com/ethereum/eth2.0-specs/issues/459) - and allows clients to balance the peers from whom they request headers. Clients could, for instance, request every 10th block from a set of peers where each peer has a different starting block in order to populate block data.
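
As a sketch of the selection rule worked through above: a hypothetical `blocks` mapping from slot to block root stands in for a real block store, and `max_headers = 5` is assumed to reproduce the five-entry example.

```python
def select_block_roots(blocks: dict, start_slot: int, max_headers: int, skip_slots: int) -> list:
    # Walk forward from start_slot in steps of skip_slots + 1. Empty slots
    # fall back to the closest previous block, and duplicates are collapsed.
    step = skip_slots + 1
    selected, slot = [], start_slot
    for _ in range(max_headers):
        candidate = slot
        while candidate >= 0 and candidate not in blocks:
            candidate -= 1
        if candidate >= 0 and (not selected or selected[-1] != blocks[candidate]):
            selected.append(blocks[candidate])
        slot += step
    return selected

full = {s: f"root_{s}" for s in range(11)}
assert select_block_roots(full, 2, 5, 1) == ["root_2", "root_4", "root_6", "root_8", "root_10"]

# With slots 3 and 4 empty, slot 4 falls back to slot 2 and collapses away.
sparse = {s: r for s, r in full.items() if s not in (3, 4)}
assert select_block_roots(sparse, 2, 5, 1) == ["root_2", "root_6", "root_8", "root_10"]
```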

### Beacon Block Bodies

**Method ID:** `12`
Contributor: Just like in LES, I would use different method ids for requests and responses. So it's possible for me to send you proactively blocks and headers using RPC, and you don't need to know about it in advance.


**Request Body:**

```
(
block_roots: []HashTreeRoot
)
```

**Response Body:**

```
(
block_bodies: []BeaconBlockBody
)
```

Requests the `block_bodies` associated with the provided `block_roots` from the peer. Responses MUST return `block_roots` in the order provided in the request. If the receiver does not have a particular `block_root`, it MUST return a zero-value `block_body` (i.e., a `block_body` container with all zero fields).
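
A minimal sketch of the ordering and zero-value rule above (`ZERO_BODY` and the lookup dict are hypothetical stand-ins for an all-zero `BeaconBlockBody` container and a real block store):

```python
ZERO_BODY = b""  # stand-in for a BeaconBlockBody container with all zero fields

def respond_block_bodies(block_roots: list, local_bodies: dict) -> list:
    # Bodies are returned in the order the roots were requested; unknown
    # roots map to the zero-value body.
    return [local_bodies.get(root, ZERO_BODY) for root in block_roots]

assert respond_block_bodies([b"r1", b"r2"], {b"r1": b"<body-1>"}) == [b"<body-1>", ZERO_BODY]
```
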
Contributor: It seems to me that when everything is going smoothly, block bodies consist of very few attestations (they should be combined by then), and a few minor items like the transfers etc. Has anyone looked at the numbers to see how much value there is in having separate requests for headers and bodies? Requesting headers then bodies creates additional round trips, which are a cost on their own.


### Beacon Chain State

**Note:** This section is preliminary, pending the definition of the data structures to be transferred over the wire during fast sync operations.

**Method ID:** `13`

**Request Body:**

```
(
hashes: []HashTreeRoot
)
```

**Response Body:** TBD

Requests contain the hashes of Merkle tree nodes that, when merkleized, yield the block's `state_root`.

The response will contain the values that, when hashed, yield the hashes inside the request body.