Vox Metrics — DAO-controlled fair value scoring (Roadmap update for Polkassembly)
Vox Metrics is now live with an open API, a dashboard, and universal weights that apply fairly across all supported networks. All scoring parameters are DAO-controlled: the community can preview proposed changes, see precisely how scores would shift, and then adopt them by updating the on-chain settings pointer (or controls.json in the reference build). The aim is a transparent, on-chain alternative to opaque, centralised macroeconomic yardsticks.
What this is (for new and returning readers)
Vox Metrics produces a single Value/Trust Score (VTS) in [0,1] for each network, built from five normalised pillars:
- PoV — Protocol of Value (throughput & resilience)
Sub-metrics: transactions, validator/effective set, uptime
- OPI — On-chain Participation & Integrity (governance health)
Sub-metrics: voter turnout, stake ratio, proposal cadence
- DLI — DeFi Liquidity (depth & efficiency)
Sub-metrics: TVL, DEX volume, slippage (inverted)
- PTI — Policy/Transparency & Security (project hygiene)
Sub-metrics: transparency, security, sentiment
- RWAI — Real-World Assets Index (tokenised RWA activity)
Sub-metrics: value locked, trades, diversity
All subscores are mapped to [0,1] on published scales, then rolled up with weights. The final VTS is reproducible, comes with a hash of inputs, and includes the controls version used.
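To make the rollup concrete, here is a minimal sketch in Python. The weight values are placeholders rather than the live DAO-adopted controls; only the structure follows the description above: map each sub-metric to [0,1] on its published scale, average within a pillar using sub-weights, then average across pillars using the universal weights.

```python
# Minimal sketch of the VTS rollup described above.
# All weight values are illustrative placeholders, NOT the live controls.

# Pillar weights (strict policy: the same set for every network).
PILLAR_WEIGHTS = {"PoV": 0.25, "OPI": 0.20, "DLI": 0.20, "PTI": 0.20, "RWAI": 0.15}

def normalise(value: float, lo: float, hi: float) -> float:
    """Map a raw metric onto [0, 1] against its published scale, clamped."""
    if hi <= lo:
        return 0.0
    return max(0.0, min(1.0, (value - lo) / (hi - lo)))

def pillar_score(sub_scores: dict, sub_weights: dict) -> float:
    """Weighted average of already-normalised sub-metric scores for one pillar."""
    total = sum(sub_weights.values())
    return sum(sub_scores[name] * w for name, w in sub_weights.items()) / total

def vts(pillar_scores: dict) -> float:
    """Roll the five pillar scores up into a single VTS in [0, 1]."""
    total = sum(PILLAR_WEIGHTS.values())
    return sum(pillar_scores[p] * w for p, w in PILLAR_WEIGHTS.items()) / total

# Example: placeholder PoV sub-scores rolled up with placeholder sub-weights.
pov = pillar_score(
    {"transactions": 0.7, "validators": 0.9, "uptime": 1.0},
    {"transactions": 0.4, "validators": 0.4, "uptime": 0.2},
)
score = vts({"PoV": pov, "OPI": 0.6, "DLI": 0.5, "PTI": 0.8, "RWAI": 0.3})
print(round(score, 3))
```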
Why DAO-controlled?
- Fairness by design: under the strict policy, one set of weights applies to every network, so no chain is advantaged by bespoke tuning.
- Public, predictable governance: proposed changes are previewed (no side-effects), discussed, then adopted by vote.
- No black boxes: the full recipe (weights, ranges, formulas, sources) is visible; anyone can reproduce results.
- Auditability: each response returns a reproducibility hash and the exact controls version used.
- Neutrality: parameters are not set by a single company or committee; they are set by token-holders in the open.
Current state: the reference implementation reads from controls.json. The next governance step is to point the service at a DAO-managed location (e.g. IPFS/Arweave/Git tag) so an approved proposal updates a single pointer that all deployments can follow.
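As an illustration of what that pointer-following step might look like, here is a hedged sketch. The pointer location, its format, and the IPFS gateway below are hypothetical; the actual mechanism will be whatever the DAO adopts.

```python
# Hypothetical sketch of following a DAO-managed controls pointer.
# POINTER_URL, the pointer format, and the gateway are assumptions,
# not the project's actual implementation.
import json
import urllib.request

POINTER_URL = "https://example.org/vox-controls-pointer.txt"  # hypothetical pointer location
IPFS_GATEWAY = "https://ipfs.io/ipfs/"                        # assumed public gateway

def load_controls() -> dict:
    """Resolve the current pointer, then fetch the controls document it names."""
    with urllib.request.urlopen(POINTER_URL) as resp:
        cid = resp.read().decode().strip()        # e.g. an IPFS CID adopted by vote
    with urllib.request.urlopen(IPFS_GATEWAY + cid) as resp:
        return json.load(resp)                    # same role as controls.json locally

controls = load_controls()
print(controls.get("controlsVersion"))
```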
Why move away from centralised macroeconomic data?
Centralised macro data (CPI, GDP prints, central bank guidance) has three practical problems for crypto networks:
- Opacity and lag: releases are scheduled, slow, revised after the fact, and often politically framed; the methodology is rarely transparent enough to audit line by line.
- Jurisdiction bias: macro signals reflect specific economies and policy choices. They are not neutral for borderless, permissionless networks.
- Single-source risk: markets can be whipsawed by one publication or a handful of institutions. That’s fragile.
Vox Metrics is built to be on-chain, transparent, and always-on:
- Data are drawn from openly queryable sources (explorers/indexers/DEX APIs).
- Normalisation and weighting are public and reproducible.
- The method is portable across networks, so comparisons are like-for-like.
- Governance is collective and inspectable, not managerial.
This is not a macro substitute for national economies; it is a fair, network-native compass for crypto systems.
What’s live today
- API & docs:
/docs
(OpenAPI).
- Dashboard:
/dashboard
with a network selector, compare networks, all networks table, and a controls sandbox.
- Universal weights: default policy is strict (same weights for every chain).
- DAO-tunable sub-metrics: sub-weights per pillar can be adjusted by governance.
- Major chains supported: Polkadot plus Ethereum, Polygon, BSC, Arbitrum, Optimism, Base, Avalanche, Fantom.
- Self-test & diagnostics: range checks, normalisation checks, and reproducibility validation.
- Reproducibility hash: every response includes a deterministic hashOfInputs.
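For readers who want to check the auditability claims directly, here is a small sketch that pulls one network's full breakdown and prints the audit fields. The base URL and the exact position of the fields in the JSON are assumptions; the endpoint path and the hashOfInputs / controlsVersion names are the ones used in this post.

```python
# Sketch: fetch one network's breakdown and surface the audit fields.
# BASE_URL and the response layout are assumptions; the endpoint path and
# the field names hashOfInputs / controlsVersion come from this post.
import requests

BASE_URL = "https://voxmetrics.example"   # hypothetical deployment URL

resp = requests.get(f"{BASE_URL}/api/v1/polkadot/metrics/all", timeout=30)
resp.raise_for_status()
data = resp.json()

print("Controls version:", data.get("controlsVersion"))
print("Hash of inputs:  ", data.get("hashOfInputs"))
```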
Roadmap status
Phase 1 — Foundation ✅
Scoring engine, normalisation, hash-of-inputs, dashboard, strict universal weights.
Phase 2 — Multi-chain (first wave) ✅
Polkadot + major EVMs with consistent scaling.
Phase 3 — Governance tooling ▶ In progress
- Preview pathway done (no write).
- Next: DAO-managed controls pointer (IPFS/Arweave/Git).
- Optional: guarded “apply controls” endpoint for authorised deployments.
Phase 4 — Data breadth & resilience ▶ In progress
More sources per chain, caching, graceful fallbacks.
Phase 5 — Polishing ▶ Next
CSV export, policy badges in UI (strict/adaptive), light anomaly alerts.
How governance changes the score (in practice)
- Propose new weights/ranges in the thread.
- Preview the impact (sandbox or POST /api/v1/controls/preview) to produce before/after tables; a request sketch follows this list.
- Discuss and refine.
- Adopt via vote; update the DAO pointer to the new controls version.
- Verify: responses show the new controlsVersion and a fresh hashOfInputs.
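A hedged sketch of the preview step follows. The request body and response shape for POST /api/v1/controls/preview are assumptions based on the description above (submit candidate controls, receive recomputed scores without anything being saved); check /docs for the real schema.

```python
# Sketch: preview candidate weights and print a rough before/after comparison.
# The request body and response shapes are assumptions; consult /docs (OpenAPI)
# for the actual schema of POST /api/v1/controls/preview.
import requests

BASE_URL = "https://voxmetrics.example"   # hypothetical deployment URL

candidate_controls = {
    # Placeholder pillar weights for illustration only.
    "weights": {"PoV": 0.30, "OPI": 0.20, "DLI": 0.20, "PTI": 0.20, "RWAI": 0.10},
}

before = requests.get(f"{BASE_URL}/api/v1/metrics/all_networks", timeout=30).json()
after = requests.post(
    f"{BASE_URL}/api/v1/controls/preview", json=candidate_controls, timeout=30
).json()

# Assuming both responses can be read as a mapping of network -> VTS.
for network, old_score in before.items():
    print(f"{network:>12}  before={old_score}  after={after.get(network)}")
```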
By default, we recommend staying in strict policy (universal weights). An adaptive policy is available if the DAO agrees to rule-based, small shifts (e.g. temporarily down-weight RWAI where it is still maturing).
Useful endpoints:
GET /api/v1/networks — list available networks
GET /api/v1/{network}/metrics/all — full breakdown for one network
GET /api/v1/metrics/all_networks — table across all networks
GET /api/v1/weights — current weights (read-only)
GET /api/v1/selftest — system check
POST /api/v1/controls/preview — try new weights without saving
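To tie the read-only endpoints together, here is a minimal sketch that lists the networks, reads the current weights, and runs the self-test. The base URL is a placeholder; the paths are the ones listed above.

```python
# Sketch: exercise the read-only endpoints listed above.
# BASE_URL is a placeholder; responses are only assumed to be JSON.
import requests

BASE_URL = "https://voxmetrics.example"   # hypothetical deployment URL

networks = requests.get(f"{BASE_URL}/api/v1/networks", timeout=30).json()
weights = requests.get(f"{BASE_URL}/api/v1/weights", timeout=30).json()
selftest = requests.get(f"{BASE_URL}/api/v1/selftest", timeout=30).json()

print("Networks:", networks)
print("Weights:", weights)
print("Self-test:", selftest)
```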
Call to action
- Review the scales and weights. If you believe PoV should emphasise validator diversity more than raw throughput, propose new sub-weights.
- Suggest sources. If there’s a reliable indexer or DEX dataset we should add for any chain, please share.
- Bring a proposal. Use the preview endpoint to produce a clear before/after table and submit it for a vote.
- Hold us to account. If anything is unclear or not reproducible, say so. This system only earns trust by being legible.
Our goal is simple: a fair, open, and genuinely useful way to compare networks that doesn’t depend on central banks, national statistics, or opaque rating shops. If that sounds right to you, help us tune it.

