The Merkle Proof Standardised Format is now closed for public comments. The working group is currently reviewing the comments received and will decide whether to return the standard to the drafting phase, proceed to publication, or withdraw it entirely. This page will be amended following their decision.
TSC Submission: Merkle Proof Standardised Format
Attribute  Description 

Version  0.5 
Author  Steve Shadders 
Reviewers  Jerry Chan, Will Devine, Alex Fauvel, Nithin Mani, Di Qiu, Roger Taylor 
Publication Date  25/11/2020 
Risk Categories  n/a 
Copyright  2020, Bitcoin Association 
IP Generation  GB2009798.6 – Merkle Proof Standardized Format 
Known Implementations  Bitcoin SV Node, mAPI v1.2 
Applies To  Miners, Wallets, Service Providers 
BRFC ID  n/a 
Acknowledgements  n/a 
Status  Public Review 
Visibility  PUBLIC 
Merkle proof standardised format
Background
The Merkle proof is fundamental to the Simplified Payment Verification (SPV) model that underpins bitcoin scaling. For SPV wallets to make use of Merkle proofs, the proofs need to be provided to wallets by miners or other actors who download full blocks. As there is likely to be a wide variety of implementations transmitting and receiving Merkle proofs for various reasons, adopting a standard format simplifies those implementations.
Problem Statement
Current prior art is embodied in FilteredBlock p2p messages which make use of a bitvector and a list of hashes. This arrangement is unnecessarily confusing, requiring validation of the proof in reverse, and does not provide an easy mechanism to calculate the positional index of a transaction in the Merkle tree. Additionally, this p2p message is likely to be deprecated in favour of other mechanisms due to the inherent scalability problems of using bloom filters for SPV clients.
To determine the validity of the proof, we will provide a description of the following:
 a binary and JSON form of the proposed data structure
 the algorithm.
Methods and Concepts
Composite proofs
It is possible, through forward compatible extensions to this data format, to provide multiple proofs in a consolidated form. In the case where two transactions are very close to each other in a block, many of the provided hash nodes are likely to be duplicated. In some cases, it may even be more efficient to provide the base layer of a Merkle subtree with a branch connecting the root of that subtree to the root of the complete tree. It is arguable that a more efficient consolidated format should be provided that enables normalization of this duplicate data.
We note the following points with respect to this design decision:
 The algorithm for validating composite proofs is significantly more complex.
 It is expected that most of the use cases will be single proofs.
 The current expectation is that most transfers will occur in the context of JSON over REST, where it is expected that GZIP compression will be enabled by default.
 Tests on heavily duplicated proofs have shown negligible differences in size after compression between deduplicated composite data structures and structures where multiple simple proofs are included.
GZIP tests
In this example of a Merkle proof for a 1 terabyte block with an expected 4 billion transactions and an approximate tree depth of 32 layers, we show the resultant size for two proofs where 16 layers overlap.
hashes: 32 x 32-byte hashes as a JSON array; hashesDup: hashes + hashes
original bytes len  hashes: 1024, hashesDup: 2048
raw bytes len  hashes: 2178, hashesDup: 4354 // JSON stringified
gzip bytes len  hashes: 1241, hashesDup: 1274 // JSON stringified then gzipped
As we can see, the difference in size after GZIP between the duplicated hashes and a single list of non-duplicated hashes is negligible.
Scope
Simple proofs only
Taking the above into account, and noting that even where a small number of proofs are required together, multiple instances of the simple proof are negligibly larger than a single consolidated proof, this specification addresses only the simple case. It does, however, make some design decisions with a view to accommodating a composite proof using the same format with extensions.
Note: The details of the composite proof will be provided as a separate standards document.
Conventions
In this document, we will describe Merkle trees in the context that they are used in Bitcoin, namely, as a tree formed from the list of transaction IDs included in a Bitcoin block, with a Merkle root for that tree included in the block header.
We refer to the layers of Merkle tree from the bottom up, with the layer containing transaction IDs as layer 0.
We index transactions in the tree starting from 0, which represents the coinbase transaction. In the example below, the element is the 5th in the tree with an index of 4.
We designate the nodes c and p with a layer index, i.e., c[n], such that c[0] is the calculated node in layer 0.

c = calculated
p = provided in proof
root is both calculated and provided from block header
tx hash in layer 0 is calculated from transaction

4:                              root
                          /               \
3:                c                               p
              /       \                       /       \
2:        p               c               o               o
        /   \           /   \           /   \           /   \
1:    o       o       c       p       o       o       o       o
     / \     / \     / \     / \     / \     / \     / \     / \
0:  o   o   o   o   c   p   o   o   o   o   o   o   o   o   o   o
                    ^
                    tx
Execution algorithm
The outcome of the algorithm is proof that a given transaction ID is included in a given block specified by its block header. We do not address the validity of the block header itself in this standard, only whether the proof of inclusion is valid for that header.
By convention we use:
 n to designate the layer of the tree, beginning at n == 0 for the bottom layer containing transaction IDs.
 i to designate the index of a node in its layer, such that the first element in layer n is at i == 0.
Beginning with the candidate transaction:
1. Calculate the transaction ID: c[0] = sha256d(transaction).
2. If no pair is provided, we assume that c[0] is the Merkle root and compare it to the root provided in the block header.
3. If a pair (p[0]) is provided, we must concatenate and hash c[0] and p[0] together to obtain the parent node (c[1]) in the next layer up. The order is determined by the positional index of c[0]:
 If indexOf(c[0]) mod 2 == 0 it is an even indexed node in the layer and is a left hand node. Thus: c[1] = sha256d(concat(c[0], p[0])).
 Otherwise it is a right hand node. Thus: c[1] = sha256d(concat(p[0], c[0])).
4. Continue this process for each layer (n) of the Merkle tree:
 If indexOf(c[n]) mod 2 == 0 it is an even indexed node in the layer and is a left hand node. Thus: c[n + 1] = sha256d(concat(c[n], p[n])).
 Otherwise it is a right hand node. Thus: c[n + 1] = sha256d(concat(p[n], c[n])).
5. Upon reaching the final layer, we have calculated the Merkle root and compare it to ensure that it is equal to the provided Merkle root, usually obtained from a Bitcoin block header.
6. If the calculated Merkle root matches the expected Merkle root, we have proven that the transaction ID c[0] is included in the Merkle tree designated by the provided Merkle root. By extension, if the Merkle root is contained in a block header with a valid proof of work for a chain of blocks we accept as the longest, then we have proven that the transaction is included in that block chain.
Note that in some cases, the provided pair will be identical to the calculated pair. This is expected behaviour when the transaction exists near the right hand edge of the Merkle tree and a corresponding right hand element does not exist. However, if the elements are equal, it should be confirmed that the calculated element c[n] is a left hand node, i.e., indexOf(c[n]) mod 2 == 0.
The alternative condition cannot happen in a Bitcoin Merkle tree. This check guards against the case where the node is near the right hand edge of an incomplete tree where the Merkle proof for a left and right hand node can be the same. This check ensures that the proof fails if the index is not correctly given.
Depth attacks
We note that step 1 is a vital part of the proof process. Without validating that the SHA256 hash is actually derived from a Bitcoin transaction, it is possible to send a partial Merkle proof for any node higher up in the tree that will validate correctly.
Required data
We can see from the above description that the following data is required to calculate the Merkle proof:
 The transaction that is being proven as included.
 The Merkle root of the tree that it is included in, such that it can be compared to the calculated root.
 A list of pairs [p[0], ..., p[n]] to the calculated nodes.
 Whether each pair is a left or right hand node. Note that this can be explicitly provided for each pair, or calculated from the index of the transaction ID in the Merkle tree.
Extension to proof of positional index
We note that if the positional index of c[0] is the mechanism used to determine the left/right ordering, then a successful proof also holds as proof of the positional index in the tree, given the following additional check:
if (indexOf(c[n]) mod 2 == 1) then c[n] != p[n]
As long as this check never fails, we can assume that the provided index is correct.
Extension to proof of last element
Extending with an additional check, we can also confirm whether c[0] is the last element of the Merkle tree by exploiting the fact that, where no right hand pair exists, a copy of the left hand pair is used in its place. The following check will always hold true if the element is last in the tree but will fail at least once if it is not:
if (indexOf(c[n]) mod 2 == 0) then c[n] == p[n]
This effectively serves as a proof of the number of transactions in the block, since for the last element indexOf(c[0]) == len(layer[0]) - 1.
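As an illustrative sketch (function name ours, not part of the standard), this check can be folded into the same index-halving walk used by the main algorithm, assuming duplicated pairs are always encoded with the "*" marker defined later in this document rather than as explicit hash copies:

```javascript
// Sketch: c[0] is the last element iff every even-indexed calculated
// node on the path has a duplicated pair ("*"). Assumes duplicates are
// always encoded with the "*" marker, never as an explicit hash copy.
function isLastInTree (index, nodes) {
  for (const p of nodes) {
    if (index % 2 === 0 && p !== '*') return false // left node with a real pair
    index = Math.floor(index / 2) // move up one layer
  }
  return true
}
```

For example, in a five-transaction block the proof for the last transaction (index 4) carries only duplicated pairs until the final layer, whereas the proof for index 2 has a real right hand pair at layer 0.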
Merkle proof data structure
The proposed structure generally consists of a flags byte, followed by the positional index of the transaction in the Merkle tree, and a list of 32-byte hashes. Optionally, we use a special encoding for hashes that are a copy of the matching calculated node. The list begins at the bottom of the tree, and each additional element is in the next layer up, representing the counterpart pair to the node calculated from the previous layer. Finally, we reach the root of the Merkle tree.
The Merkle proof data can be structured in Binary or in JSON formats shown below:
Binary
 varint is defined as a Bitcoin varint (CompactSize), which is compact for cases where values in the range of 0 – 252 are commonly expected.
type: byte, // 0 == hash (byte[32]), 1 == duplicate (byte[0]), 2 == index (varint)
value: byte[32] or varint or byte[0] //determined by type
Note: Type 2 is reserved for use in an extension format. The duplicate type could have been omitted, allowing us to eliminate the type byte entirely from this standard. However, its inclusion does have some potential space benefits whilst making the proposed extension standard fully backward compatible.
flags: byte,
index: varint,
txLength: varint, //omitted if flag bit 0 == 0 as it's a fixed length transaction ID
txOrId: byte[32 or variable length],
target: byte[32 or 80], //determined by flag bits 1 and 2
nodeCount: varint,
nodes: node[]
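For illustration, the layout above can be serialized with a short sketch (function names are ours; this is a minimal subset handling only the hash and duplicate node types, and omitting the reserved index type and the 8-byte varint case):

```javascript
// Illustrative serializer for the binary layout above (a sketch, not a
// reference implementation). The varint is the Bitcoin CompactSize encoding.
function varint (n) {
  if (n < 0xfd) return Buffer.from([n])
  if (n <= 0xffff) {
    const b = Buffer.alloc(3); b[0] = 0xfd; b.writeUInt16LE(n, 1); return b
  }
  const b = Buffer.alloc(5); b[0] = 0xfe; b.writeUInt32LE(n, 1); return b
  // (the 0xff / 8-byte case is omitted for brevity)
}

function serializeProof ({ flags, index, txOrId, target, nodes }) {
  const parts = [Buffer.from([flags]), varint(index)]
  if (flags & 0x01) parts.push(varint(txOrId.length)) // txLength, full tx only
  parts.push(txOrId, target, varint(nodes.length))    // target size set by flags
  for (const node of nodes) {
    if (node === '*') parts.push(Buffer.from([1])) // type 1: duplicate, no value
    else parts.push(Buffer.from([0]), node)        // type 0: 32-byte hash value
  }
  return Buffer.concat(parts)
}
```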
Nodes and duplicated indexes
The standard specifies that hashes and potentially block headers are encoded as hex strings. We use the special string "*" as a marker to indicate that the hash is a duplicated node, where the p[n] hash should be assumed to be a copy of c[n]. We choose this special marker string rather than null for the following reasons:
 The encoding is shorter than encoding null but slightly longer than encoding an empty string.
 To avoid ambiguity in error cases: it is common for software bugs to produce unexpected null or empty string results. By making null an unexpected result, these errors are easier to detect.
Fields
Flags
There are several choices to be made in a Merkle proof format.
 Transaction or TxId: Whether to include the original transaction, given that in most cases it will already be known, or just the transaction ID.
 Target type: Whether the final element should be the Merkle root, the entire block header, the block hash of the block it contains, or omitted completely. In some cases, the Merkle proof may be provided in the context of an envelope data structure (e.g. an SPV envelope) that already contains the missing elements.
 Proof type: Whether it is a branch or a tree. The tree option is only used in an extension to this standard but the flag bit is reserved for that purpose.
 Composite proof: Whether this proof is part of a set of proofs and contains references to other proofs in the set. This option is only used in an extension to this standard but the flag bit is reserved for that purpose.
The following bits are reserved for each purpose, counted from the right most (least significant) bit. Although some of the following combinations may not have real use cases, for future proofing, these values are reserved.
Transaction or TxId (bit 0)
Corresponds to: (flags & 0x01)
Where value equals:
0: The `txOrId` field contains a transaction ID
1: The `txOrId` field contains a full transaction
Target type (bits 1 and 2):
Corresponds to: (flags & (0x04 | 0x02))
Where value equals:
0: The `target` field contains a block hash
2: The `target` field contains a block header
4: The `target` field contains a merkle root
6: Not valid
Proof type (bit 3):
Corresponds to: (flags & 0x08)
Where value equals:
0: Merkle branch (or subbranch in the extension format)
1: Merkle tree (or subtree in the extension format)
Composite proof (bit 4):
Corresponds to: (flags & 0x10)
Where value equals:
0: Single proof
1: Composite proof
Defaults
Note that default values have been selected such that a flag value of 0 means:
 The `txOrId` field contains a transaction ID.
 The `target` field contains a block hash.
 The proof is a Merkle branch.
 The proof is a single proof that can be validated using the simplest algorithm.
JSON
For JSON, we propose simply breaking the elements up into an array of hex values.
// simple version where only a single path is specified
{
  // mandatory fields:
  "index": 4,
  "txOrId": "bytes", // hex bytes; 32 bytes if an ID, otherwise interpret as a full transaction
  "target": "merkleroot_or_blockhash_or_blockheader", // which one is specified by targetType
  // "nodes" contains the provided pair nodes to the calculated nodes in the proof:
  //   "hash" is a simple hash node
  //   an integer is an index to a subtree or subpath (only used in the extension)
  //   "*" is a copy of the calculated node, e.g. parent = hash(left + left)
  "nodes": ["hash", "hash", "*", ... "hash"],
  // optional fields:
  // can be one of: 'blockHash' or 'blockHeader' or 'merkleRoot'
  "targetType": "blockHeader", // defaults to 'blockHash' if not specified
  // can be one of: 'branch' or 'tree'
  "proofType": "branch", // defaults to 'branch' if not specified
  // can be one of: true or false
  "composite": false // defaults to false if not specified
}
JSON note
In the JSON form of this data structure, the flag byte is omitted for simplicity; type fields are used instead for types that cannot be inferred, such as targetType or proofType.
Index and left/right determination
A complete Merkle proof requires not only the transaction you are proving, but also the transaction ID of its pair. If the transaction index in layer 0 of the tree is i, then where i mod 2 == 0 the pair will be at index i + 1, and where i mod 2 == 1 the pair will be at index i - 1. This ordering is required to be maintained when hashing the left/right nodes to calculate the parent. It could optionally also be provided as a flag for each provided node in the tree.
We have chosen to use the positional index rather than left/right flags for the following reasons:
 It simplifies the data structure to include only a single integer.
 The index in block may be useful in some cases.
 The index in block can be proven as part of the Merkle proof.
 Calculation of the leftness or rightness of a node, along with the indexes of parent nodes during proof execution, is trivially simple.
 Whilst it is possible to calculate the positional index using left/right flags, the calculation is more complex, and even in the best cases (assuming 1TB blocks) requires just as much data to be included as is required for the positional index.
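The pair and parent index arithmetic just described reduces to two one-liners (helper names ours, for illustration only):

```javascript
// Given a node's index i in its layer:
const pairIndex = (i) => i % 2 === 0 ? i + 1 : i - 1 // sibling in the same layer
const parentIndex = (i) => Math.floor(i / 2)         // index in the layer above
```

For the example tree earlier in this document, the transaction at index 4 pairs with index 5, and both share the parent at index 2 in layer 1.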
Nodes
The nodes field is an array data structure that contains all of the nodes that must be provided as pairs to the calculated nodes. It does NOT include the transaction ID being proven (assumed to be calculated from the transaction) OR the Merkle root (calculated as part of the proof).
Both the binary and the JSON formats specify a mechanism for designating elements of the path that are duplicates, i.e., where the provided element p[n] should be assumed to be a copy of the calculated element c[n] for that layer of the tree.
Merkle proof validation
In the following example, we will validate a simple Merkle branch type proof. A complete functioning version of this example code is available here: https://github.com/bitcoin-sv-specs/merkle-proof-standard-example.
const bsv = require('bsv')
const { swapEndianness } = require('buffer-swap-endianness')

function VerifyMerkleProof (merkleProof) {
  // flags:
  let txid
  if (merkleProof.txOrId.length === 64) {
    // The `txOrId` field contains a transaction ID
    txid = merkleProof.txOrId
  } else if (merkleProof.txOrId.length > 64) {
    // The `txOrId` field contains a full transaction
    const tx = new bsv.Transaction(merkleProof.txOrId)
    txid = tx.id
  } else {
    throw new Error('invalid txOrId length - must be at least 64 chars (32 bytes)')
  }

  let merkleRoot
  if (!merkleProof.targetType || merkleProof.targetType === 'blockHash') {
    // The `target` field contains a block hash
    if (merkleProof.target.length !== 64) {
      throw new Error('invalid target field')
    }
    // You will need to get the block header corresponding
    // to this block hash in order to get the merkle root
    // from it. You can get this from the headers
    // store of an SPV client or from a third party
    // provider like WhatsOnChain
    const header = mapHashToHeader[merkleProof.target]
    if (!header) {
      throw new Error('block hash map to header not found in `mapHashToHeader`')
    }
    merkleRoot = extractMerkleRootFromBlockHeader(header)
  } else if (merkleProof.targetType === 'blockHeader' && merkleProof.target.length === 160) {
    // The `target` field contains a block header
    merkleRoot = extractMerkleRootFromBlockHeader(merkleProof.target)
  } else if (merkleProof.targetType === 'merkleRoot' && merkleProof.target.length === 64) {
    // The `target` field contains a merkle root
    merkleRoot = merkleProof.target
  } else {
    throw new Error('invalid targetType or target field')
  }

  if (merkleProof.proofType && merkleProof.proofType !== 'branch') {
    throw new Error('only merkle branch supported in this version') // merkle tree proof type not supported
  }

  if (merkleProof.composite === true) { // OR: if (merkleProof.composite && merkleProof.composite !== false)
    throw new Error('only single proof supported in this version') // composite proof type not supported
  }

  if (!txid) {
    throw new Error('txid missing')
  }
  if (!merkleRoot) {
    throw new Error('merkleRoot missing')
  }

  const nodes = merkleProof.nodes // the provided nodes used in the merkle proof
  let index = merkleProof.index // index of node in current layer (will be changed on every iteration)
  let c = txid // first calculated node is the txid of the tx to prove
  let isLastInTree = true

  for (let p of nodes) {
    // Check if the node is the left or the right child
    const cIsLeft = index % 2 === 0

    // Check for duplicate hash - this happens if the node (p) is
    // the last element of an uneven merkle tree layer
    if (p === '*') {
      if (!cIsLeft) { // a duplicate can only ever be a right hand node
        return { proofValid: false, isLastInTree: false }
      }
      p = c
    }

    // This check fails at least once if it's not the last element
    if (cIsLeft && c !== p) {
      isLastInTree = false
    }

    // Calculate the parent node
    if (cIsLeft) {
      // Concatenate left leaf (c) with right leaf (p)
      c = getMerkleTreeParent(c, p)
    } else {
      // Concatenate left leaf (p) with right leaf (c)
      c = getMerkleTreeParent(p, c)
    }

    // We need integer division here with the remainder dropped.
    // JavaScript does floating point math by default, so we
    // need to use Math.floor to drop the fraction.
    // In most languages we would simply use: index = index / 2
    index = Math.floor(index / 2)
  }

  // c is now the calculated merkle root
  return {
    proofValid: c === merkleRoot,
    isLastInTree
  }
}

function getMerkleTreeParent (leftNode, rightNode) {
  // swap endianness before concatenating
  const leftConc = swapEndianness(Buffer.from(leftNode, 'hex'))
  const rightConc = swapEndianness(Buffer.from(rightNode, 'hex'))

  // concatenate leaves
  const concat = Buffer.concat([leftConc, rightConc])

  // hash the concatenation
  const hash = bsv.crypto.Hash.sha256sha256(concat)

  // swap endianness at the end and convert to hex string
  return swapEndianness(Buffer.from(hash)).toString('hex')
}

function extractMerkleRootFromBlockHeader (blockHeader) {
  const blockHeaderBytes = Buffer.from(blockHeader, 'hex')

  // extract the 32 bytes that come after the version (4 bytes)
  // and the previous block hash (32 bytes)
  // https://en.bitcoin.it/wiki/Block_hashing_algorithm
  return swapEndianness(blockHeaderBytes.slice(36, 68)).toString('hex')
}
History
Artefact  Description 

Errata  n/a 
Change Log  n/a 
Decision Log  n/a 
Relationships
Relationship  Description 

IP licences and dependencies  Licensing terms to be announced 
Previous versions  n/a 
Extends (existing standards extended by this standard)  n/a 
Modifies (existing standards changed by this standard)  n/a 
Deprecates (existing standards obsoleted by this standard)  n/a 
Depends On (existing standards that are foundational or prerequisite to this standard)  n/a 
Prior Art  n/a 
Existing Solution  n/a 
References  n/a 
Supplementary Material
No supplementary material
This Standard is at the public review stage and comments are closed.