LazAI Research: How the AI Economy Goes Beyond the DeFi TVL Myth


By: BlockBeats · 2025/05/14 11:05

1. DeFi transformed blockchain networks into a global, permissionless disruption of traditional finance through a series of simple yet powerful economic primitives: Total Value Locked (TVL), Annual Percentage Yield (APY/APR), and liquidity.
2. The AI field currently lacks foundational primitives comparable to DeFi's for measuring data quality, model performance, agent reliability, and alignment incentives, limiting the development of the AI economy.
3. The LazAI project addresses this by introducing Data Anchoring Tokens (DAT) and individual-centric DAOs (iDAO), which together enable data pricing, traceability, and programmable use.

Original Title: "Defining New Primitives in an AI-Native Economy"


Introduction


Decentralized Finance (DeFi) has ignited an exponential growth story by transforming blockchain networks into global permissionless markets through a set of simple yet powerful economic primitives, thoroughly disrupting traditional finance. In the rise of DeFi, several key metrics became a universal language of value: Total Value Locked (TVL), Annual Percentage Yield (APY/APR), and liquidity. These concise metrics sparked participation and trust. For example, in 2020, DeFi's TVL (the dollar value of assets locked in protocols) surged 14x, followed by another 4x in 2021, peaking at over $112 billion. High yields (with some platforms claiming APYs as high as 3000% during the liquidity mining craze) attracted liquidity, and the depth of liquidity pools signaled lower slippage and a more efficient market. In essence, TVL tells us "how much capital is involved," APR tells us "how much yield can be earned," and liquidity indicates "the ease of asset exchange." Despite their flaws, these metrics built a hundred-billion-dollar DeFi ecosystem from scratch. By turning user participation into direct financial opportunity, DeFi created a self-reinforcing adoption flywheel, rapidly driving widespread involvement.


Today, AI finds itself at a similar crossroads. However, unlike DeFi, the current AI narrative is dominated by large-scale general models trained on massive internet datasets. These models often struggle to deliver effective results in niche domains, specialized tasks, or personalized needs. Their "one-size-fits-all" approach, while powerful, is brittle and misaligned. This paradigm is in dire need of a shift. The next era of AI should not be defined by model scale or universality but should focus on a bottom-up approach—smaller, highly specialized models. Such tailored AIs require a new kind of data: high-quality, human-aligned, and domain-specific data. However, acquiring such data is not as straightforward as web scraping; it requires active and conscious contributions from individuals, domain experts, and the community.


To drive this specialized, human-aligned AI era, we need to build an incentive flywheel designed similarly to how DeFi incentivizes. This means introducing new AI-native primitives to measure data quality, model performance, agent reliability, and alignment incentives—metrics that should directly reflect the true value of data as an asset (rather than just an input).


This article will explore these new primitives that can form the pillars of an AI-native economy. We will discuss how, with the right economic infrastructure in place (i.e., generating high-quality data, incentivizing its creation and use fairly, and putting the individual at the center), AI will thrive. We will also use platforms like LazAI as examples to analyze how they have pioneered the construction of these AI-native frameworks, leading the way in a new paradigm of pricing and rewarding data to power the next leap in AI innovation.


The DeFi Incentive Flywheel: TVL, APY, and Liquidity - A Quick Review


The rise of DeFi is not coincidental; its design makes participation both lucrative and transparent. Total Value Locked (TVL), Annual Percentage Yield (APY/APR), and liquidity are key metrics that are not just numbers but rather primitives aligning user behavior with network growth. These metrics together form a virtuous cycle that attracts users and capital, driving further innovation.


Total Value Locked (TVL): TVL measures the total capital deposited into DeFi protocols (such as lending pools and liquidity pools), serving as the "market cap" equivalent for DeFi projects. Rapid TVL growth is read as a sign of user trust and protocol health. For example, during the 2020-2021 DeFi boom, TVL surged from under $1 billion to over $100 billion, showcasing the scale of value participants were willing to lock into decentralized applications. High TVL creates a gravitational pull: more capital means higher liquidity and stability, attracting more users seeking opportunities. Although critics point out that blindly chasing TVL may lead to unsustainable incentives (essentially "buying" TVL) and mask inefficiencies, without TVL the early DeFi narrative would have lacked a concrete way to track adoption.


Annual Percentage Yield (APY/APR): Yield promises turn participation into tangible opportunity. DeFi protocols began offering astonishing APYs to liquidity and fund providers. For example, Compound introduced the COMP token in mid-2020, pioneering the liquidity mining model: rewarding liquidity providers with governance tokens. This innovation sparked a frenzy of activity. Using the platform was no longer just consuming a service but making an investment. High APYs attract yield seekers, further driving up TVL. This reward mechanism incentivized early adopters with generous returns, fueling network growth.
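For intuition on those quoted yields, the underlying arithmetic is simple compounding: a nominal APR, compounded frequently, produces a noticeably higher effective APY. A minimal Python sketch (the 100% APR figure is purely illustrative):

```python
def apr_to_apy(apr: float, compounds_per_year: int) -> float:
    """Effective APY for a nominal APR at a given compounding frequency."""
    return (1 + apr / compounds_per_year) ** compounds_per_year - 1

# A 100% APR compounded daily is roughly a 171% APY --
# one reason quoted DeFi yields look so dramatic.
print(f"{apr_to_apy(1.00, 365):.2%}")  # ~171.46%
```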


Liquidity: In finance, liquidity refers to the ability to move assets without causing significant price fluctuations - the cornerstone of a healthy market. Liquidity in DeFi is often kickstarted through liquidity mining schemes (users earn tokens for providing liquidity). Deep liquidity in decentralized exchanges and lending pools means users can trade or borrow with low friction, improving the user experience. High liquidity brings higher trading volumes and utility, attracting more liquidity - a classic positive feedback loop. It also supports composability: developers can build new products (derivatives, aggregators, etc.) on top of a liquid market, driving innovation. Thus, liquidity becomes the lifeblood of the network, propelling adoption and the emergence of new services.
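To see concretely why pool depth lowers slippage, here is a minimal sketch of a constant-product (x * y = k) swap, the pricing rule popularized by Uniswap-style AMMs; the reserve sizes are hypothetical and fees are ignored:

```python
def swap_output(x_reserve: float, y_reserve: float, dx: float) -> float:
    """Tokens received from a constant-product (x * y = k) pool, ignoring fees."""
    k = x_reserve * y_reserve
    return y_reserve - k / (x_reserve + dx)

# The same 10-token trade against a shallow pool vs. a deep pool:
print(swap_output(100, 100, 10))        # ~9.09 out: ~9% slippage
print(swap_output(10_000, 10_000, 10))  # ~9.99 out: ~0.1% slippage
```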


These primitives together form a powerful incentive flywheel. Participants who create value by locking assets or providing liquidity immediately receive rewards (through high yields and token incentives), encouraging further participation. This transforms individual involvement into widespread opportunity - users earn profits and governance influence - which in turn fuels network effects, drawing in waves of new users. The results are remarkable: by 2024, the DeFi user count had exceeded 10 million, with the sector's value growing nearly 30x in just a few years. Clearly, massive incentive alignment - converting users into stakeholders - was a key driver of DeFi's exponential rise.


The Current Missing Piece in the AI Economy


If DeFi has shown how bottom-up participation and incentive alignment can kickstart a financial revolution, today's AI economy still lacks the foundational primitives to support a similar transformation. Current AI is dominated by large-scale, general-purpose models trained on massive scraped datasets. These foundational models aim to solve every problem and, as a result, often serve no one in particular effectively. Their "one-size-fits-all" architecture struggles to adapt to niche domains, cultural differences, or individual preferences, leading to brittle outputs, blind spots, and growing misalignment with real-world needs.


The definition of the next-generation AI will no longer be just about scale but will also involve contextual understanding — the ability for models to understand and serve specific domains, professional communities, and diverse human perspectives. However, this contextual intelligence necessitates different inputs: high-quality, human-aligned data. And this is exactly what is currently missing. There is currently no widely accepted mechanism to measure, identify, value, or optimize such data, nor an open process for individuals, communities, or domain experts to contribute their perspectives and improve intelligent systems that increasingly impact their lives. As a result, value remains concentrated in the hands of a few infrastructure providers, disconnecting the masses from the upward potential of the AI economy. Only by designing new primitives that can unearth, validate, and reward high-value contributions (data, feedback, alignment signals) can we unlock the participatory growth loop that DeFi thrives on.


In short, we must also ask:


How should we measure the value created? How can we build a self-reinforcing adoption flywheel to drive individual-centric, bottom-up data participation?


To unlock an "AI-native economy" akin to DeFi, we need to define new primitives that translate participation into AI opportunities, thus catalyzing network effects unseen in the field to date.


AI-Native Tech Stack: New Primitives of the New Economy


We are no longer just transferring tokens between wallets; we are feeding data into models, turning model outputs into decisions, and letting AI agents take action. This requires new metrics and primitives to quantify intelligence and alignment, much as DeFi's metrics quantify capital. For example, LazAI is building a next-generation blockchain network that addresses the AI data-alignment problem by introducing a new asset standard for AI data, model behavior, and agent interactions.


The following outlines several key primitives defining the on-chain AI economic value:


Verifiable Data (New "Liquidity"): Data to AI is like liquidity to DeFi — the lifeblood of the system. In AI (especially large models), having the right data is crucial. But raw data may be of low quality or misleading, necessitating on-chain verifiable high-quality data. One possible primitive here is "Proof of Data (PoD)/Proof of Data Value (PoDV)." This concept would measure the value of data contributions based not only on quantity but also quality and their impact on AI performance. It can be seen as the equivalent of liquidity mining: contributors providing valuable data (or labels/feedback) will be rewarded based on the value their data brings. Early designs of such systems have already emerged. For instance, a blockchain project's Proof of Data (PoD) consensus treats data as the primary resource for validation (similar to energy in Proof of Work or capital in Proof of Stake). In this system, nodes are rewarded based on the quantity, quality, and relevance of the data they contribute.
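The article names PoD/PoDV without specifying a formula, so the following is only a hypothetical sketch of how such a score might combine quantity, quality, and relevance; all field names and weights are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class DataContribution:
    samples: int       # quantity: number of verified samples
    quality: float     # 0..1, e.g. from validator audits
    relevance: float   # 0..1, e.g. measured lift on a target model

def podv_score(c: DataContribution, wq: float = 0.5, wr: float = 0.5) -> float:
    """Hypothetical Proof-of-Data-Value score: quantity scaled by quality
    and relevance, so raw bulk alone earns comparatively little."""
    return c.samples * (wq * c.quality + wr * c.relevance)

# Sharing a 100-token epoch reward pool pro rata by score (illustrative):
contribs = [DataContribution(1_000, 0.9, 0.8), DataContribution(5_000, 0.2, 0.1)]
total = sum(podv_score(c) for c in contribs)
print([round(100 * podv_score(c) / total, 1) for c in contribs])  # [53.1, 46.9]
```

Note how the smaller, higher-quality contribution out-earns the five-times-larger low-quality one: the weighting, not the volume, carries the reward.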


Expanding this to the General AI Economy, we may see the concept of "Total Data Value Locked (TDVL)" as a metric: an aggregated measure of all valuable network data weighted by verifiability and utility. Verified data pools could even be traded similarly to liquidity pools — for example, a verified medical image pool for on-chain AI diagnosis could have quantifiable value and utilization. Data provenance (understanding data sources, modification history) will be a key part of this metric, ensuring that the data input into AI models is trustworthy and traceable. Fundamentally, if liquidity is about deployable capital, verifiable data is about deployable knowledge. Metrics like Proof of Data Value (PoDV) could capture the amount of useful knowledge locked in the network, and through on-chain data anchoring achieved by LazAI's Data Anchoring Token (DAT), data liquidity becomes a measurable, incentivized economic layer.
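Likewise, a TDVL-style aggregate could hypothetically be computed by discounting each data pool's appraised value by its verifiability and utilization; all figures below are invented:

```python
def tdvl(pools: list[dict]) -> float:
    """Hypothetical Total Data Value Locked: appraised value weighted
    by how verifiable and how actively used each data pool is."""
    return sum(p["value"] * p["verifiability"] * p["utilization"] for p in pools)

pools = [
    {"value": 1_000_000, "verifiability": 0.95, "utilization": 0.6},  # audited medical images
    {"value": 2_000_000, "verifiability": 0.30, "utilization": 0.1},  # unverified web scrape
]
print(tdvl(pools))  # ~630000: verified, heavily used data dominates the metric
```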


Model Performance (a new asset class): In the AI economy, well-trained models (or AI services) become assets in themselves—arguably a new asset class alongside tokens and NFTs. A well-trained AI model holds value through the intelligence encapsulated in its weights. But how do we represent and measure that value on-chain? We may need on-chain performance benchmarks or model certifications. For instance, metrics such as accuracy on standard datasets or win rates in competitive tasks could be recorded on-chain as performance scores—an on-chain "credit rating" or KPI for AI models. Such ratings could be adjusted as the model is fine-tuned or updated. Projects like Oraichain have explored combining AI model APIs with reliability scores (validating AI outputs against test cases) on-chain. In AI-native DeFi ("AiFi"), one can envision staking based on model performance: a developer stakes tokens on their model's claimed performance; if an independent on-chain audit validates that performance, they receive a reward, and if the model underperforms, they lose the stake. This incentivizes developers to report truthfully and continuously improve their models. Another concept is tokenized model NFTs carrying performance metadata—the "floor price" of a model NFT may reflect its utility. Such practices have started to emerge: some AI marketplaces allow trading of model access tokens, and protocols like LayerAI (formerly CryptoGPT) explicitly treat data and AI models as emerging asset classes in the global AI economy. In summary, if DeFi asks "How much capital is locked?", AI-DeFi will ask "How much intelligence is locked?"—referring not only to computational power (though that matters too) but also to the efficacy and value of the models running in the network. New metrics could include a "Model Quality Proof" or a time-series index of on-chain AI performance improvements.
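A minimal sketch of the performance-staking idea above, with an invented settlement rule; the tolerance and reward rate are illustrative parameters, not any protocol's actual mechanism:

```python
def settle_performance_stake(stake: float, claimed: float, audited: float,
                             tolerance: float = 0.02, reward_rate: float = 0.10) -> float:
    """Hypothetical settlement: a developer stakes on a claimed benchmark
    score; an on-chain audit either pays a reward or slashes the stake."""
    if audited >= claimed - tolerance:
        return stake * (1 + reward_rate)  # claim held up: stake returned plus reward
    return 0.0                            # claim failed: stake slashed

print(settle_performance_stake(1_000, claimed=0.91, audited=0.92))  # ~1100.0
print(settle_performance_stake(1_000, claimed=0.91, audited=0.75))  # 0.0
```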


Agency and Utility Behavior (On-Chain AI Agents): One of the most exciting and challenging new elements in AI-native blockchains is the emergence of autonomous AI agents running on-chain. These could be trading bots, data curators, customer service AIs, or complex DAO governors — essentially software entities capable of perceiving, deciding, and acting on behalf of users or even autonomously. The DeFi world only has basic "bots"; however, in the AI blockchain world, agents could become primary economic actors. This has led to a demand for standards around agent behavior, trustworthiness, and utility. We may see mechanisms similar to "Agent Utility Scores" or reputation systems. Imagine each AI agent (possibly represented as an NFT or Semi-Fungible Token (SFT)) accumulating reputation based on its actions (task completion, collaboration, etc.). These ratings are akin to credit scores or user ratings but tailored to AI. Other contracts could then decide whether to trust or use agent services based on this. In LazAI's proposed iDAO (Individual-centric DAO) concept, each agent or user entity has its own on-chain domain and AI assets. One can envision these iDAOs or agents establishing measurable records.
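Purely as a hypothetical sketch, one simple way to maintain such an agent utility score is an exponential moving average over task outcomes, with a threshold gating high-value mandates; the numbers are invented:

```python
def update_utility_score(score: float, outcome: float, weight: float = 0.1) -> float:
    """Hypothetical agent utility score: an exponential moving average over
    task outcomes (1.0 = success, 0.0 = failure), so recent behavior counts
    most and no single task erases a long track record."""
    return (1 - weight) * score + weight * outcome

score = 0.50                               # a new agent starts at a neutral score
for outcome in [1.0, 1.0, 1.0, 0.0, 1.0]:  # observed task results
    score = update_utility_score(score, outcome)

MIN_SCORE_FOR_LARGE_MANDATES = 0.8         # cf. the credit-score analogy below
print(round(score, 3), score >= MIN_SCORE_FOR_LARGE_MANDATES)  # 0.615 False
```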


Existing platforms have begun to tokenize AI agents and endow them with on-chain metrics: for example, Rivalz's "Rome protocol" created NFT-based AI agents (rAgents) with their latest reputation metric recorded on-chain. Users can stake or lend these agents, with rewards dependent on the agent's performance and influence within the collective AI "swarm." This is essentially DeFi for AI agents and showcases the importance of agent utility metrics. In the future, we may discuss "active AI agents" much like we discuss active addresses or talk about "agent economic impact" akin to discussing transaction volumes.


An attention trace could become another primitive—a record of what the agent paid attention to during decision-making (which data, which signals). This would make black-box agents more transparent and auditable, and would let an agent's success or failure be attributed to specific inputs. In essence, agent behavior metrics will ensure accountability and alignment: to entrust autonomous agents with managing large funds or critical tasks, their reliability needs to be quantified. A high agent utility score could become a prerequisite for on-chain AI agents managing significant funds (similar to how a high credit score in traditional finance is a threshold for large loans).
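A hypothetical shape for such an attention trace: a log entry that hashes the inputs an agent consulted before a decision, so outcomes can later be attributed to specific data; the agent and input identifiers are invented:

```python
import hashlib
import json
import time

def attention_trace(agent_id: str, inputs: list[str], decision: str) -> dict:
    """Record which inputs (by content hash) informed an agent's decision,
    making the decision auditable without revealing the raw inputs."""
    return {
        "agent": agent_id,
        "timestamp": time.time(),
        "inputs": [hashlib.sha256(i.encode()).hexdigest() for i in inputs],
        "decision": decision,
    }

trace = attention_trace("agent-42", ["price feed snapshot", "risk memo v3"], "rebalance")
print(json.dumps(trace, indent=2))
```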


Aligning incentives with AI metrics: Lastly, the AI economy needs to consider how to incentivize beneficial usage and alignment. DeFi incentivizes growth through liquidity mining, early user airdrops, or fee rebates; in AI, mere usage growth is insufficient—we need to incentivize the use that improves AI outcomes. At this point, metrics tied to AI alignment become crucial. For example, a human feedback loop (such as users rating AI responses or providing corrections through an iDAO, which will be detailed further below) can be recorded, and feedback contributors can earn "alignment rewards." Alternatively, envision "attention proof" or "participation proof" where users investing time in enhancing AI (by providing preference data, corrections, or new use cases) receive rewards. Metrics could be attention traces, capturing high-quality feedback or human attention devoted to optimizing AI.
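A hypothetical sketch of such an alignment-reward split, assuming accepted corrections are scored by the evaluation improvement they produce and share an epoch pool pro rata; all names and numbers are invented:

```python
from collections import defaultdict

def alignment_rewards(feedback_log: list[dict], pool: float) -> dict:
    """Split an epoch reward pool among contributors whose accepted
    feedback measurably improved the model."""
    lift = defaultdict(float)
    for entry in feedback_log:
        if entry["accepted"]:
            lift[entry["contributor"]] += entry["eval_improvement"]
    total = sum(lift.values()) or 1.0  # avoid division by zero in empty epochs
    return {who: pool * v / total for who, v in lift.items()}

log = [
    {"contributor": "alice", "accepted": True,  "eval_improvement": 0.03},
    {"contributor": "bob",   "accepted": True,  "eval_improvement": 0.01},
    {"contributor": "carol", "accepted": False, "eval_improvement": 0.00},
]
print(alignment_rewards(log, pool=100.0))  # roughly {'alice': 75.0, 'bob': 25.0}
```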


Just as DeFi requires block explorers and dashboards (like DeFi Pulse and DeFiLlama) to track TVL and yields, the AI economy will need new explorers to track these decentralized AI metrics—imagine an "AI-llama" dashboard displaying total aligned data, active AI agent counts, cumulative AI utility rewards, and more. The form resembles DeFi's, but the content is entirely novel.


Towards the DeFi-Style AI Flywheel


We need to build an incentive flywheel for AI — to treat data as a first-class economic asset, transforming AI development from a closed-door affair to an open, participatory economy, much like how DeFi has turned finance into a user-driven open liquidity venue.


Early explorations in this direction have already begun. For example, Vana and other projects have started rewarding users for participating in data sharing. The Vana network allows users to contribute personal or community data to a DataDAO (decentralized data pool) and earn data-specific tokens (exchangeable for the network's native token). This is a crucial step towards monetizing data contributors.


However, merely rewarding contribution is not enough to replicate DeFi's explosive flywheel. In DeFi, liquidity providers not only earn rewards for depositing assets; their assets also carry a transparent market value, and earnings reflect real usage (trading fees, borrowing interest, and incentive tokens). Similarly, the AI data economy needs to move beyond generic rewards and directly price data. Without economic pricing based on data quality, scarcity, or the extent of model improvement, we risk shallow incentives: distributing token rewards merely for participation may favor quantity over quality, or stagnate when the tokens lack a tangible link to AI utility. To truly unlock innovation, contributors need clear market-driven signals, an understanding of their data's value, and rewards when their data is actively used in AI systems.


We need an infrastructure more focused on directly valuing and incentivizing data to create a decentralized data incentive loop: the more high-quality data people contribute, the better the models, attracting more usage and data demand, thus increasing contributor rewards. This will shift AI from a closed competition for big data to an open market of trusted, high-quality data.


How are these ideas reflected in real-world projects? Take LazAI, for example — the project is building the next-generation blockchain network and foundational primitives for a decentralized AI economy.


LazAI Overview — Aligning AI with Humanity


LazAI is a next-generation blockchain network and protocol designed to address the AI data alignment problem, building the infrastructure for a decentralized AI economy by introducing a new asset standard for AI data, model behavior, and agent interactions.


LazAI provides one of the most forward-looking approaches, solving the AI alignment problem on-chain by making data verifiable, incentivized, and programmable. The following sections will use LazAI's framework as an example to illustrate how an AI-native blockchain puts these principles into practice.

Core Issue - Data Misalignment and Lack of Fair Incentives


AI alignment is often attributed to training data quality, yet the future necessitates new data that is aligned with and governed by humans, trustworthy, and transparent. As the AI industry transitions from centralized general models to contextualized, aligned intelligence, the infrastructure must evolve in parallel. The next AI era will be defined by alignment, precision, and traceability. LazAI addresses the challenges of data alignment and incentives head-on, proposing a fundamental solution: aligning data at the source and directly rewarding the data itself. In other words, ensuring that training data verifiably represents the human perspective, is denoised/debiased, and rewarding based on data quality, scarcity, or the extent to which it improves models. This marks a paradigm shift from patching models to curating data.


LazAI not only introduces primitives but also proposes a new paradigm for data acquisition, pricing, and governance. Its core concepts include Data Anchoring Tokens (DAT) and Individual-Centric DAOs (iDAO), which together achieve data pricing, traceability, and programmable use.


Verifiable and Programmable Data - Data Anchoring Tokens (DAT)


To achieve this goal, LazAI introduces a new on-chain primitive - Data Anchoring Token (DAT), a novel token standard designed for AI data assetization. Each DAT represents on-chain anchored data along with its provenance: contributor identity, evolving history over time, and usage scenarios. This creates a verifiable history for each piece of data - akin to a version control system for datasets (such as Git) but secured by the blockchain. Since DATs exist on-chain, they are programmable: smart contracts can manage their usage rules. For example, data contributors can specify that their DAT (such as a set of medical images) is only accessible to specific AI models or used under specific conditions (enforcing privacy or ethical constraints through code). Incentives are manifested through DAT tradability or staking - if data is valuable to a model, the model (or its owner) may pay for access to the DAT. Essentially, LazAI constructs a market where data is tokenized and traceable. This directly corresponds to the discussed "verifiable data" metric: by inspecting DATs, it can be verified whether they are validated, how many models have used them, and what model performance improvements they bring. Such data will receive higher valuation. By anchoring data on-chain and linking economic incentives to quality, LazAI ensures AI training on trustworthy and measurable data. This tackles the problem through incentive alignment - high-quality data is rewarded and stands out.
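Going only by the description above, a DAT might carry fields like the following; this is an illustrative sketch, not LazAI's actual token standard:

```python
from dataclasses import dataclass, field

@dataclass
class DataAnchoringToken:
    """Sketch of a DAT per the description above: anchored data (by hash),
    provenance, and programmable usage rules. Field names are invented."""
    data_hash: str                                     # content hash anchoring the off-chain data
    contributor: str                                   # contributor identity
    allowed_models: set = field(default_factory=set)   # usage rules enforced in code
    history: list = field(default_factory=list)        # evolving provenance records
    usage_count: int = 0

    def authorize(self, model_id: str) -> bool:
        """Grant access only to whitelisted models, logging every use."""
        if model_id not in self.allowed_models:
            return False
        self.usage_count += 1
        self.history.append(("used_by", model_id))
        return True

dat = DataAnchoringToken("0xabc123", "clinic-7", allowed_models={"diagnosis-model-v1"})
print(dat.authorize("diagnosis-model-v1"), dat.authorize("unknown-model"))  # True False
```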


Individual-Centric DAO (iDAO) Framework


The second key component is LazAI's concept of iDAO (Individual-Centric DAOs), which redefines the governance model in the AI economy by placing individuals (rather than organizations) at the core of decision-making and data ownership. Traditional DAOs typically prioritize collective organizational goals, inadvertently undermining individual will. iDAOs disrupt this logic. They are personalized governance units that allow individuals, communities, or domain-specific entities to directly own, control, and validate their contributions to AI systems' data and models. iDAOs support customized, aligned AI: as a governance framework, they ensure that models always align with contributors' values or intentions. From an economic perspective, iDAOs also make AI behavior community-programmable - rules can be set to restrict how models use specific data, who can access models, and how model output gains are distributed. For example, an iDAO could stipulate that whenever its AI model is called (such as an API request or task completion), some of the earnings will be returned to the DAT holder who contributed the relevant data. This establishes a direct feedback loop between agent behavior and contributor rewards - similar to mechanisms in DeFi where liquidity provider earnings are tied to platform usage. Moreover, iDAOs can interact with each other through protocols to achieve composability: one AI agent (iDAO) can invoke another iDAO's data or model under negotiated terms.
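The revenue-sharing example in this paragraph could look, in sketch form, like the following; the 30% contributor cut and all names are invented for illustration:

```python
def split_inference_fee(fee: float, dat_shares: dict, contributor_cut: float = 0.3) -> dict:
    """Hypothetical iDAO payout rule: each model call routes a fixed cut of
    the fee back to DAT holders, pro rata by their recorded data value."""
    pool = fee * contributor_cut
    total = sum(dat_shares.values())
    payouts = {holder: pool * share / total for holder, share in dat_shares.items()}
    payouts["idao_treasury"] = fee - pool
    return payouts

print(split_inference_fee(10.0, {"alice-DAT": 850, "bob-DAT": 150}))
# ~{'alice-DAT': 2.55, 'bob-DAT': 0.45, 'idao_treasury': 7.0}
```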


By establishing these primitives, LazAI's framework turns the vision of a decentralized AI economy into reality: data becomes an asset owned by users who can benefit from it, models shift from private silos to collaborative projects, and every participant—from individuals curating unique datasets to developers building specialized models—can become a stakeholder in the AI value chain.


Building an AI Trust Foundation: Verifiable Computing Framework


Within this ecosystem, LazAI's Verifiable Computing Framework serves as the core foundation of trust. This framework ensures that every generated DAT, every iDAO (individual-centric DAO) decision, and every incentive allocation has a verifiable audit trail, making data ownership enforceable, governance processes traceable, and agent behavior auditable. By transforming iDAOs and DATs from theoretical concepts into reliable, verifiable systems, the Verifiable Computing Framework achieves a paradigm shift in trust—from reliance on assumptions to deterministic guarantees grounded in mathematical verification.


Realizing Value in the Decentralized AI Economy
With this foundation in place, the vision of a decentralized AI economy becomes realizable:


Data Assetization: Users can claim ownership of data assets and earn returns


Model Collaboration: AI models evolve from closed silos to open collaborative endeavors


Participation Equity: From data contributors to vertical model developers, all participants can become stakeholders in the AI value chain


This incentive-compatible design is poised to replicate the growth momentum of DeFi: as users realize that participating in AI development (through data contribution or expertise) can directly translate into economic opportunities, participation enthusiasm will be ignited. As the scale of participants expands, network effects emerge—more high-quality data leads to better models, attracting more users, and creating more data demand, forming a self-reinforcing growth flywheel.


Conclusion: Toward an Open AI Economy


The journey of DeFi has shown that the right primitives can unlock unprecedented growth. In the coming AI-native economy, we stand at a similar inflection point. By defining and implementing new primitives that value data and alignment, we can transform AI development from centralized engineering into a decentralized, community-driven endeavor. The journey is not without challenges: economic mechanisms must prioritize quality over quantity, and ethical pitfalls must be avoided so that data incentives do not compromise privacy or fairness. But the direction is clear. Practices like LazAI's DAT and iDAO are paving the way, translating the abstract idea of "AI aligned with humans" into concrete mechanisms of ownership and governance.


Just as early DeFi experimented with optimizing TVL, liquidity mining, and governance, the AI economy will iterate on its new primitives. Debates and innovations around measuring data value, distributing rewards fairly, aligning AI agents, and sharing the resulting benefits will undoubtedly emerge. This article only scratches the surface of the incentive models that could drive AI democratization, and aims to stimulate open discussion and deeper research: How can we design more AI-native economic primitives? What unexpected consequences or opportunities might arise? Through broad community participation, we are more likely to build an AI future that is not only technologically advanced but also economically inclusive and aligned with human values.


The exponential growth of DeFi was not magic—it was driven by incentive alignment. Today, we have the opportunity to catalyze an AI renaissance through similar practices with data and models. By turning participation into opportunity and opportunity into network effects, we can set in motion a flywheel for reshaping the value creation and distribution of the digital age.


Let's build this future together—one verifiable dataset, one aligned AI agent, one new primitive at a time.


This article is a contribution and does not represent the views of BlockBeats




