One of the greatest headaches for a blockchain developer is keeping fees as low as possible without sacrificing security or speed.
At Enjin, we view such obstacles as an opportunity to innovate, push the limits, and adapt our solutions to the evolving needs of our users.
In this article, I’ll describe one of the core optimizations we’ve applied to Efinity, which will vastly improve the experience of our game adopters and users:
- Minting a massive number of non-fungible tokens (NFTs)
- Batch transfers of a massive number of NFTs
Don’t miss the final benchmarks at the end of the article!
Optimizing NFT Transactions
The Issue: I/O Throughput
Efinity is developed using Substrate and will be deployed as a parachain on Polkadot. In this ecosystem, storage access (reading or writing the state of the blockchain) is the critical factor when you benchmark the transactions (extrinsics) of your runtime.
The goal here is to reduce the number of I/O operations on storage as much as possible, which translates directly into lower fees for the user.
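To make the cost model concrete, here is a hedged sketch of how a FRAME extrinsic's weight is typically declared; the constants, function name, and read/write counts are illustrative, not Efinity's actual values:

```rust
// Illustrative only: not Efinity's real weights. In a FRAME runtime, the
// declared weight of an extrinsic (which drives the fee) is usually a fixed
// computational term plus the cost of its database reads and writes.
use frame_support::{traits::Get, weights::Weight};

/// Weight of a hypothetical naive transfer: one storage read (source balance)
/// plus two writes (source and target balances).
fn illustrative_transfer_weight<T: frame_system::Config>() -> Weight {
    Weight::from_parts(10_000, 0)
        .saturating_add(T::DbWeight::get().reads_writes(1, 2))
}
```

Every extra read or write added by an extrinsic shows up directly in this expression, and therefore in the fee charged to the user.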
In order to store the NFT balance of an account, we could use the following structure:
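A minimal sketch of such a structure in a FRAME pallet (the item name `Balances` and the `AssetId`, `TokenId`, and `Balance` types on the pallet's `Config` are assumptions for illustration, not Efinity's actual code):

```rust
// Sits inside the usual #[frame_support::pallet] module; a sketch, not the
// real Efinity code. Stores the balance of a given (asset, token) pair held
// by an account, keyed so we can query a single entry or iterate over an
// account's tokens, with or without fixing the asset.
#[pallet::storage]
pub type Balances<T: Config> = StorageNMap<
    _,
    (
        NMapKey<Blake2_128Concat, T::AccountId>, // owner
        NMapKey<Blake2_128Concat, T::AssetId>,   // asset (collection)
        NMapKey<Blake2_128Concat, T::TokenId>,   // token inside the asset
    ),
    T::Balance,
    ValueQuery,
>;
```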
In this way, we can store/query the balance of the given token belonging to the specified asset for the target account. In Substrate, we could also iterate over the storage and enumerate the tokens owned by an account with or without fixing the asset.
However, the big issue with this representation is the number of I/O operations required by massive minting and batch transfers. For instance, creating 1,000,000 tokens for a new game would require at least 1,000,000 writes to storage.
In the same way, batch transfers would not be optimized: they would take one read plus two writes (one on the source account and another on the target) per individual transfer.
The Solution: Chunks of Tokens
One way to reduce the I/O on the storage is by grouping things. In this case, we are going to put a group of tokens into a single structure: the chunk.
A chunk is a group of sequential tokens that share an index.
For instance, let’s assume we set the chunk size to 512 elements; the chunk index is then the integer division of the token ID by 512 (the chunk size).
If we follow the previous example, minting 1,000,000 tokens now requires writing only 1,000,000 / 512 ≈ 1,954 chunks, so we have reduced the I/O from 1,000,000 operations to 1,954.
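A small, self-contained sketch of this chunk arithmetic (the 512-element chunk size comes from the example above; the function name is illustrative):

```rust
// Chunk size from the example above: each chunk groups 512 sequential token IDs.
const CHUNK_SIZE: u64 = 512;

/// The chunk a token belongs to is simply its ID divided (integer division)
/// by the chunk size, e.g. tokens 0..=511 -> chunk 0, 512..=1023 -> chunk 1.
fn chunk_index(token_id: u64) -> u64 {
    token_id / CHUNK_SIZE
}

fn main() {
    assert_eq!(chunk_index(0), 0);
    assert_eq!(chunk_index(511), 0);
    assert_eq!(chunk_index(512), 1);

    // Minting 1,000,000 sequential tokens now touches one storage entry per
    // chunk instead of one per token: ceil(1_000_000 / 512) = 1_954 writes.
    let minted: u64 = 1_000_000;
    let chunk_writes = (minted + CHUNK_SIZE - 1) / CHUNK_SIZE;
    assert_eq!(chunk_writes, 1_954);
    println!("writes: {chunk_writes} instead of {minted}");
}
```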
One Step Further: Ranges
Now that we’ve reduced the number of I/O operations, let’s try to reduce the fees and space dedicated to storing our token IDs. We are going to take advantage of sequential token IDs to compress the chunks.
A range is a half-open interval of token IDs, e.g., [0, 512), representing the chunk of tokens 0, 1, 2, …, 511. Instead of writing every token ID inside the chunk, we will write only ranges.
The best case is a full chunk, which needs only two IDs to define it. For instance, a chunk holding the first 10 tokens [0,1,2,3,4,5,6,7,8,9] compresses into the single range [0,10): the uncompressed version uses 10 integers, while the compressed version requires only 2.
The worst case is when a chunk contains only odd or only even token IDs, in which case we need 512 IDs to represent the ranges of just 256 tokens. For instance, if a chunk contains non-sequential elements like [0, 2, 4, 6], its compressed representation actually requires more space: { [0,1), [2,3), [4,5), [6,7) }.
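A hedged sketch of how a chunk’s token IDs could be compressed into half-open ranges (the representation and function name are illustrative, not the exact Efinity encoding):

```rust
use std::ops::Range;

/// Compress a sorted list of unique token IDs into half-open ranges,
/// e.g. [0,1,...,9] -> [0..10] and [0,2,4,6] -> [0..1, 2..3, 4..5, 6..7].
fn compress(token_ids: &[u64]) -> Vec<Range<u64>> {
    let mut ranges: Vec<Range<u64>> = Vec::new();
    for &id in token_ids {
        match ranges.last_mut() {
            // Extend the current range when the next ID is contiguous.
            Some(last) if last.end == id => last.end = id + 1,
            // Otherwise start a new single-element range.
            _ => ranges.push(id..id + 1),
        }
    }
    ranges
}

fn main() {
    // Best case: a fully sequential chunk compresses to a single range.
    assert_eq!(compress(&[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]), vec![0..10]);
    // Worst case: non-sequential IDs need one range per token.
    assert_eq!(compress(&[0, 2, 4, 6]), vec![0..1, 2..3, 4..5, 6..7]);
}
```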
Using ranges increases the complexity of some operations, such as range subtraction and addition (which are used for transfers between accounts), but that extra computation is an order of magnitude cheaper than the I/O operations it saves.
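For example, transferring a sub-range out of an account means subtracting it from the sender’s ranges and adding it to the receiver’s. A minimal sketch of the subtraction side (illustrative only, ignoring error handling and the on-chain encoding):

```rust
use std::ops::Range;

/// Remove `taken` from a set of half-open ranges, e.g. taking 100..200 out of
/// [0..512] leaves [0..100, 200..512]. Conceptually, this is the work done on
/// the sender's chunk when tokens move to another account.
fn subtract(ranges: &[Range<u64>], taken: &Range<u64>) -> Vec<Range<u64>> {
    let mut result = Vec::new();
    for r in ranges {
        if r.end <= taken.start || taken.end <= r.start {
            // No overlap: keep the range untouched.
            result.push(r.clone());
        } else {
            // Overlap: keep the left and/or right leftovers, if any.
            if r.start < taken.start {
                result.push(r.start..taken.start);
            }
            if taken.end < r.end {
                result.push(taken.end..r.end);
            }
        }
    }
    result
}

fn main() {
    assert_eq!(subtract(&[0..512], &(100..200)), vec![0..100, 200..512]);
    assert_eq!(subtract(&[0..512], &(0..512)), Vec::<Range<u64>>::new());
}
```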
Performance: Sounds good, but let me see the figures.
The most important rule: any improvement MUST be supported by benchmark figures.
The following table shows the transactions affected by this optimization. The rest of the pallet’s extrinsics are omitted because their performance was not affected:
Some important things to highlight:
- NFT minting sees an impressive improvement of 99.8%. In this first draft, I was able to mint 120,000,000 NFTs in a single block.
- Batched NFT transfers only saw a 2x improvement, but the standard error suggests we could get better figures for specific use cases.
- A new optimized chunked NFT transfer. Any smart wallet could take advantage of the underlying optimization through new API functions (see the sketch after this list). Transferring up to 512 tokens in the same chunk will cost the same as a single transfer.
- The degradation on single transfers is around 8% in this initial draft. We are still working on some of the operations involved, so this number should decrease in future versions.
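As a rough idea of what such a chunk-level transfer function could look like, here is a purely hypothetical signature (the name, parameters, and `ChunkIndex` type are illustrative assumptions, not the actual Efinity API):

```rust
// Fragment of a hypothetical pallet call; not the real Efinity extrinsic.
// The point is that moving any subset of one chunk touches a constant number
// of storage entries: one read/write on the sender's chunk and one on the
// recipient's, regardless of how many tokens are moved (up to the chunk size).
use core::ops::Range;

pub fn transfer_chunk(
    origin: OriginFor<T>,
    recipient: T::AccountId,
    asset_id: T::AssetId,
    chunk_index: ChunkIndex,
    ranges: Vec<Range<T::TokenId>>, // which token ranges of the chunk to move
) -> DispatchResult {
    // ... subtract `ranges` from the sender's chunk, add them to the recipient's.
    Ok(())
}
```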
The source code will be open-sourced soon.
Conclusions
Efinity will democratize NFTs through micro-fees, and it will be a game-changer for developers and businesses that require better performance for massive operations.
This kind of optimization can be applied in other areas as well, and these figures are achieved on a layer-1 chain, where security and liquidity remain at high levels.