Github: https://github.com/patractlabs/elara
Homepage: https://elara.patract.io
Elara's goal is to build an infrastructure and public network access service for developers in the Polkadot ecosystem, similar to Infura. The project will connect to the Polkadot and Kusama relay chains, parachains, and independent Substrate-based chains. Developers do not need to run chain nodes themselves; they can easily and quickly access the entire Polkadot ecosystem through Elara. We have completed the development of Treasury Proposal #33 for v0.2 and now provide the Polkadot node service. We have also written a detailed development report (#192) explaining our architecture. Developers can already create and manage projects on Elara, access the Polkadot relay chain over the HTTP and WebSocket protocols, and view project-related monitoring statistics.
Elara has a long-term iteration plan and maintenance goals so that it can support the growth of Polkadot ecosystem developer requests from the 1M+ to the 1B+ level. At its core, Elara is a distributed architecture with high performance, high availability, and high scalability, which distinguishes it from other projects offering similar node API services:
Elara's backend is not a simple but inefficient NodePool + LoadBalancer design, which cannot withstand the impact of large-scale applications and traffic and cannot be scaled in time, because the nodes become a significant bottleneck in the system. In Elara's architecture, only a small number of nodes are required to provide the basic data sources; Elara supports large numbers of users through its distributed design, the integration of multiple services, and aggressive optimization of the request access path. The short-term development cost of the NodePool design is low, but its long-term unit cost of serving users is extremely high. Conversely, Elara's initial engineering cost is relatively high, but in the long term it greatly reduces the unit cost of serving users while ensuring high service quality.
Elara focuses on providing a simple but full-featured experience for ecosystem developers. What we provide is an extremely "thin" layer of service; ideally, developers should not even notice Elara's existence. The API service Elara provides includes not only all node functions, but also historical state data, a per-project request statistics dashboard, and other features. The capabilities offered to developers are far greater than what deploying a node provides. Under this goal, features such as "one-click node deployment" are redundant, because developers should only need to focus on their own business applications.
Elara is an open platform, upholding the principle of community building. In upcoming versions we will continue to integrate major mainnets, and we will establish a set of process specifications for the automated onboarding of Polkadot ecosystem mainnets and parachains.
The official https://polkadot.js.org/apps/ already supports multiple mainnets. Elara needs to upgrade its current architecture to achieve broad support for multi-chain nodes and configurable onboarding. We will contact the project owners one by one to discuss integrating the nine currently active chains: Kusama, Mandala, Darwinia, Dock, Edgeware, Kulupu, Nodle, Plasm, and Stafi.
Developers who are unfamiliar with the Substrate RPC interface are often confused about how to use specific interfaces and have nowhere to turn for help, especially since some mainnets currently lack complete RPC interface documentation. Therefore, Elara urgently needs to provide instructions and simple examples for each mainnet's RPC interfaces.
The back-end system of Elara v0.2 is divided into three independently scalable microservices:

- `Developer-Account`: manages the user's account and login state.
- `Stat`: handles `Project` management and data statistics.
- `API`: the `Route` module that manages user `Request`s.

The `API` service receives a request, forwards it to the corresponding blockchain node on the backend, and relays the response back to the client. This service is the request entry point directly facing DApps, and it is the part of the entire system architecture with the largest request volume and the greatest access pressure. Therefore, optimizing the performance and response speed of the `API` service is the focus of architecture optimization.
After analyzing the interfaces of the Substrate RPC module, we can see that the RPC interface is divided into WebSocket interfaces and HTTP interfaces. The HTTP interfaces can be further divided into cacheable and uncacheable interfaces. Take the `finalized_head` interface as an example:

```rust
#[rpc(name = "chain_getFinalizedHead", alias("chain_getFinalisedHead"))]
fn finalized_head(&self) -> Result<Hash>;
```
`finalized_head` returns the latest finalized block, and the return value of this interface is fixed within one block period. Likewise, in the node's underlying KV storage, the `key` corresponding to this interface is also fixed. Similar interfaces include the more commonly used `chain_getHeader` and `chain_getBlock`: if no specific parameters are given, they default to the information of the current latest `best` block, and the latest `best` block is also fixed within one block period.
```rust
#[rpc(name = "chain_getHeader")]
fn header(&self, hash: Option<Hash>) -> FutureResult<Option<Header>>;

#[rpc(name = "chain_getBlock")]
fn block(&self, hash: Option<Hash>) -> FutureResult<Option<SignedBlock>>;
```
Therefore, our architecture can add a system cache layer on the most commonly used access paths. During a cache cycle, a specific key-value pair is cached; a request that hits the cache gets its response directly from the cache layer without visiting a back-end node. This significantly reduces request pressure on the back-end nodes, enables rapid responses, and increases throughput capacity exponentially.
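As a rough sketch of such a cache layer (not Elara's actual code; the type and names here are illustrative assumptions), entries keyed by RPC method expire after one block period, about 6 seconds on Polkadot:

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

/// Illustrative per-method response cache: entries expire after one
/// block period, so a cached value is never staler than one block.
struct BlockPeriodCache {
    ttl: Duration,
    entries: HashMap<String, (Instant, String)>,
}

impl BlockPeriodCache {
    fn new(ttl: Duration) -> Self {
        Self { ttl, entries: HashMap::new() }
    }

    /// Return the cached response if it is still within the block period.
    fn get(&self, key: &str) -> Option<&String> {
        self.entries.get(key).and_then(|(stored_at, value)| {
            if stored_at.elapsed() < self.ttl { Some(value) } else { None }
        })
    }

    /// Store a response obtained from a back-end node.
    fn put(&mut self, key: String, value: String) {
        self.entries.insert(key, (Instant::now(), value));
    }
}

fn main() {
    let mut cache = BlockPeriodCache::new(Duration::from_secs(6));
    cache.put("chain_getFinalizedHead".to_string(), "0xabc".to_string());
    // A request arriving within the block period hits the cache and
    // never reaches the node.
    assert_eq!(cache.get("chain_getFinalizedHead"), Some(&"0xabc".to_string()));
    // An uncached method misses and must be forwarded to the node.
    assert!(cache.get("state_getStorage").is_none());
}
```

In production this role would be filled by a shared cache service rather than an in-process map, so that all `API` instances see the same entries.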
Based on the above, we optimized the system architecture as shown in the figure below. `API` filters and transforms each request. For interface requests that can be converted to a fixed key, the cache is consulted first: on a hit, the cached value is returned directly; on a miss, the request is forwarded to the node.
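The filtering step might look like the following sketch. The method names are real Substrate RPC names, but the classification function itself is an illustrative assumption:

```rust
/// Decide whether an incoming RPC request can be served from the cache,
/// and if so under which fixed key (illustrative, not Elara's actual code).
fn cache_key(method: &str, params: &[&str]) -> Option<String> {
    match method {
        // Fixed result within one block period, no parameters.
        "chain_getFinalizedHead" => Some(method.to_string()),
        // Fixed only when no block hash is given, i.e. the request
        // defaults to the latest best block.
        "chain_getHeader" | "chain_getBlock" if params.is_empty() => {
            Some(method.to_string())
        }
        // Everything else is forwarded straight to the node.
        _ => None,
    }
}

fn main() {
    assert_eq!(
        cache_key("chain_getFinalizedHead", &[]),
        Some("chain_getFinalizedHead".to_string())
    );
    assert_eq!(cache_key("chain_getHeader", &[]), Some("chain_getHeader".to_string()));
    // With an explicit hash the result is block-specific; skip the cache here.
    assert_eq!(cache_key("chain_getBlock", &["0xdead"]), None);
    assert_eq!(cache_key("author_submitExtrinsic", &["0x00"]), None);
}
```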
Nodes usually run in normal mode and save world state only within a certain interval. This saves running costs and fulfills most RPC requests, but it cannot serve accesses to historical state. Elara aims to fulfill all types of RPC access, including access to long-distance historical state. For example, the `storage` interface needs to access the historical storage of the state at a specified block:
```rust
#[rpc(name = "state_getStorage", alias("state_getStorageAt"))]
fn storage(&self, key: StorageKey, hash: Option<Hash>) -> FutureResult<Option<StorageData>>;
```
This requires that Elara's back-end chain nodes run in `archive` mode. `archive` mode saves the node's full historical state, so historical data access in all RPC requests can be served. However, this mode carries very high node operating costs, and reading historical data is relatively slow. We must therefore provide historical data access on the one hand, and on the other reduce data redundancy, lighten the back-end nodes, reduce storage pressure, and shorten the response time for historical data access. For this we made further changes to the architecture to optimize the access path for historical state and storage keys:
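One way to picture this split is a routing decision in the `API` layer: recent state goes to a normal-mode node, while pruned historical state goes to the archive data service. The sketch below is an assumption for illustration (Substrate's default pruning window of 256 blocks is used as the cutoff, and block numbers stand in for hashes):

```rust
/// Illustrative routing target for a state_getStorage request.
enum Backend {
    /// A normal-mode node that keeps only recent state.
    Node,
    /// The archive data service backed by PostgreSQL.
    ArchiveDb,
}

/// Route the query by how far the requested block lies behind the best
/// block (hypothetical logic, not Elara's actual implementation).
fn route_storage_query(requested_block: u64, best_block: u64) -> Backend {
    // Normal-mode Substrate nodes keep roughly the last 256 states.
    const PRUNING_WINDOW: u64 = 256;
    if best_block.saturating_sub(requested_block) < PRUNING_WINDOW {
        Backend::Node
    } else {
        Backend::ArchiveDb
    }
}

fn main() {
    // A query against a recent block is served by an ordinary node.
    assert!(matches!(route_storage_query(9_990, 10_000), Backend::Node));
    // A long-distance historical query is answered from the archive table.
    assert!(matches!(route_storage_query(100, 10_000), Backend::ArchiveDb));
}
```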
In this architecture, we introduce the `substrate-archive` component and a `PostgreSQL` database, and carry out secondary development of `substrate-archive` to make its functionality more complete, supporting node data rollback, key-value change notification, and other features. The `substrate-archive` component scans and parses the full key-value data of nodes running in `archive` mode and synchronously writes it into a key-value table in `PostgreSQL` for the API service to query.
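The sync flow can be pictured as follows, with an in-memory map standing in for the `PostgreSQL` key-value table (all names here are illustrative, not the actual `substrate-archive` API):

```rust
use std::collections::BTreeMap;

/// Stand-in for the PostgreSQL key-value table: rows keyed by
/// (block number, storage key), holding the encoded storage value.
type KvTable = BTreeMap<(u64, Vec<u8>), Vec<u8>>;

/// Write one block's storage changes into the table. Re-running for the
/// same block is an idempotent upsert, which also keeps a
/// rollback-then-resync safe.
fn sync_block(table: &mut KvTable, block: u64, changes: Vec<(Vec<u8>, Vec<u8>)>) {
    for (key, value) in changes {
        table.insert((block, key), value);
    }
}

fn main() {
    let mut table = KvTable::new();
    sync_block(&mut table, 1, vec![(b"balance:alice".to_vec(), b"100".to_vec())]);
    sync_block(&mut table, 2, vec![(b"balance:alice".to_vec(), b"90".to_vec())]);
    // The API service can now answer historical state queries from the
    // table without touching an archive node.
    assert_eq!(table.get(&(1, b"balance:alice".to_vec())), Some(&b"100".to_vec()));
    assert_eq!(table.get(&(2, b"balance:alice".to_vec())), Some(&b"90".to_vec()));
}
```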
- Develop program structure expansion to support a multi-chain architecture
- Implement the `Archive` data service
- Implement the system cache layer and the cloud `Cache` storage service
- Documentation development and front-end optimization of the Elara official website