Opening — Why this matters now

DeFi is no longer the experimental playground it was in 2020. It is an always-on, adversarial, liquidity-saturated environment where billions move across autonomous code. Yet beneath this supposed transparency lies a human opacity problem: we still don’t know why people perform the transactions they do. The chain is public; the intent is not.

In an ecosystem where one misread approval call can spiral into an eight-figure hack, understanding intent isn’t a luxury — it’s risk hygiene. The paper “Know Your Intent: An Autonomous Multi-Perspective LLM Agent Framework for DeFi User Transaction Intent Mining” tackles this head-on with an ambitious idea: a multi-agent, self-reflective LLM framework that studies on-chain behavior like an investigative analyst with infinite patience.

Background — From signatures to semantics

On-chain data is famously unforgiving. Smart contract logs are hex strings masquerading as structure; transactions often represent multi-step strategies, cross-contract flows, or off-chain‑driven actions. Humans piece this together using Etherscan, DeFi dashboards, a bit of intuition, and a lot of coping.

Existing approaches — rule-based heuristics, supervised models, even graph methods — handle structure but not semantics. They detect “what happened.” They rarely explain “why.” The gap is especially glaring for:

  • Composite actions (multi-step swaps, approvals, staking)
  • Off-chain drivers (market volatility, macro events, KOL noise)
  • Behavior chains (sequences revealing strategies rather than isolated calls)

This paper argues that intent cannot be inferred from a single transaction or isolated attributes. Instead, intent must be constructed through coordinated perspectives — much like how seasoned MEV researchers work.

Analysis — How TIM breaks down the black box

The proposed Transaction Intent Mining (TIM) framework is a structurally deterministic, multi-agent LLM system. It does four clever things:

1. Plans the analysis — meta-cognition, not brute force

A Meta-Level Planner (MP) reads the raw transaction and dynamically selects multiple analytical lenses such as:

  • Smart contract semantics
  • Temporal context
  • Market and macro conditions
  • Protocol background
  • Potential attack patterns

This replaces brittle rule-based pipelines with adaptive cognitive routing.
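
To make the routing concrete, here is a minimal sketch of what an MP-style step could look like, assuming a generic llm completion callable that you supply; the perspective names simply mirror the list above, and none of this is the paper's actual implementation.

```python
import json
from typing import Callable

# Candidate analytical lenses the planner can route to (mirroring the list above).
PERSPECTIVES = [
    "smart_contract_semantics",
    "temporal_context",
    "market_and_macro_conditions",
    "protocol_background",
    "potential_attack_patterns",
]

def plan_perspectives(tx_summary: str, llm: Callable[[str], str]) -> list[str]:
    """Meta-planner sketch: ask the LLM which lenses this transaction deserves."""
    prompt = (
        "You are a meta-level planner for DeFi transaction intent mining.\n"
        f"Transaction summary:\n{tx_summary}\n\n"
        "Return a JSON array containing only the relevant perspectives from: "
        + json.dumps(PERSPECTIVES)
    )
    try:
        chosen = json.loads(llm(prompt))
    except json.JSONDecodeError:
        chosen = PERSPECTIVES  # fall back to analysing every lens
    return [p for p in chosen if p in PERSPECTIVES]
```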

2. Decomposes complexity using domain-expert agents

Each perspective spawns a Domain Expert (DE) — effectively a synthetic specialist.

DEs:

  • Break down the intent problem into sub-questions
  • Request data (ABI, trace, historical actions, market data)
  • Create structured task lists

Their outputs capture interpretation, not just extraction.
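
A rough sketch of how a Domain Expert might be expressed in code, again assuming a generic llm completion callable; the SubTask and ExpertPlan dataclasses and the line format are illustrative choices, not the paper's schema.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class SubTask:
    question: str            # e.g. "Which pools did the swap route through?"
    data_needed: list[str]   # e.g. ["abi", "trace", "price_history"]

@dataclass
class ExpertPlan:
    perspective: str
    subtasks: list[SubTask] = field(default_factory=list)

def decompose(perspective: str, tx_summary: str, llm: Callable[[str], str]) -> ExpertPlan:
    """Domain Expert sketch: turn one perspective into evidence-gathering subtasks."""
    prompt = (
        f"As a {perspective} specialist, list the sub-questions needed to infer the "
        f"user's intent behind this transaction:\n{tx_summary}\n"
        "Format: one per line as '<question> | <comma-separated data sources>'."
    )
    plan = ExpertPlan(perspective=perspective)
    for line in llm(prompt).splitlines():
        if "|" in line:
            question, data = line.split("|", 1)
            plan.subtasks.append(
                SubTask(question.strip(), [d.strip() for d in data.split(",")])
            )
    return plan
```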

3. Executes granular reasoning through Question Solvers (QS)

QS agents operate under a “single responsibility” principle. They:

  • Retrieve live on/off‑chain data
  • Perform reflective reasoning (ReAct-style)
  • Pass memory forward across subtasks

This prevents hallucinated leaps and enforces evidence-driven reasoning.
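
Here is a minimal ReAct-style loop in the same spirit, with the tool names (e.g. get_trace) as hypothetical placeholders; the real QS agents pull live on/off-chain data, but the think/act/observe pattern is the same idea.

```python
from typing import Callable

def solve(
    question: str,
    tools: dict[str, Callable[[str], str]],
    memory: list[str],
    llm: Callable[[str], str],
    max_steps: int = 4,
) -> str:
    """Question Solver sketch: a ReAct-style think/act/observe loop for one subtask."""
    scratchpad = list(memory)  # evidence carried forward from earlier subtasks
    for _ in range(max_steps):
        prompt = (
            f"Question: {question}\n"
            "Evidence so far:\n" + "\n".join(scratchpad) + "\n"
            "Reply 'ACT: <tool> <argument>' to fetch more data, or "
            "'ANSWER: <conclusion grounded strictly in the evidence above>'."
        )
        step = llm(prompt).strip()
        if step.startswith("ANSWER:"):
            return step.removeprefix("ANSWER:").strip()
        if step.startswith("ACT:"):
            parts = step.split(maxsplit=2)
            if len(parts) == 3 and parts[1] in tools:
                _, tool_name, arg = parts
                observation = tools[tool_name](arg)  # e.g. tools["get_trace"]("0x...")
                scratchpad.append(f"{tool_name}({arg}) -> {observation}")
    return "inconclusive"
```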

4. Validates everything with a Cognitive Evaluator (CE)

The CE is the skeptic of the system. It filters claims based on:

  • Evidence verifiability (traceable, factual, reproducible)
  • Intent relevance (does this actually support the claimed motive?)

Only intents surviving scrutiny make it into the final output.
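
A sketch of the two gates as code, assuming the same generic llm callable; the yes/no prompts and the conjunction of the two checks are illustrative, not the paper's exact criteria.

```python
from typing import Callable

def evaluate(claimed_intent: str, evidence: list[str], llm: Callable[[str], str]) -> bool:
    """Cognitive Evaluator sketch: an intent survives only if both gates say YES."""
    joined = "\n".join(evidence)

    verifiable = llm(
        "Is every statement below traceable to factual, reproducible on/off-chain "
        "evidence? Answer YES or NO.\n" + joined
    ).strip().upper().startswith("YES")

    relevant = llm(
        f"Does this evidence actually support the claimed intent '{claimed_intent}'? "
        "Answer YES or NO.\n" + joined
    ).strip().upper().startswith("YES")

    return verifiable and relevant  # only surviving intents reach the final output
```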

A visualization — TIM’s cognitive assembly line

Layer | Role | Core Function
Meta-Level Planner | Cognitive router | Builds perspectives; tailors agent plan
Domain Experts | Decomposers | Create question chains; contextual reasoning
Question Solvers | Executors | Fetch, interpret, reflect on multi-modal data
Cognitive Evaluator | Auditor | Validates reasoning; prunes hallucinations

The result: a machine that doesn’t just detect intent — it argues it.
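
Wiring the four sketches above together gives a sense of that assembly line; this is hypothetical glue reusing the helper functions from the earlier sketches, not TIM's actual orchestration.

```python
from typing import Callable

def mine_intent(
    tx_summary: str,
    llm: Callable[[str], str],
    tools: dict[str, Callable[[str], str]],
) -> list[str]:
    """Hypothetical one-pass pipeline: MP -> DEs -> QSs -> CE."""
    surviving_intents = []
    for perspective in plan_perspectives(tx_summary, llm):      # Meta-Level Planner
        plan = decompose(perspective, tx_summary, llm)          # Domain Expert
        evidence: list[str] = []
        for sub in plan.subtasks:                               # Question Solvers
            evidence.append(solve(sub.question, tools, evidence, llm))
        candidate = llm(
            "Given this evidence, state the user's most likely intent in one "
            "sentence:\n" + "\n".join(evidence)
        )
        if evaluate(candidate, evidence, llm):                  # Cognitive Evaluator
            surviving_intents.append(candidate)
    return surviving_intents
```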

Findings — Does it actually work?

The experiments span 600 expert-labeled Ethereum transactions across 21 intent subclasses.

TIM vs. ML models and LLM baselines

TIM decisively outperforms traditional classification techniques:

Method | F1-micro
Naive Bayes | 0.49
SVM | 0.54
XGBoost | 0.61
CNN | 0.62
Single LLM | 0.30
Single Agent | 0.41
TIM | 0.75

Static models confuse ubiquitous actions (e.g., spot trading for profit) with rarer ones (e.g., arbitrage). They also miss multi-step or context-driven intents.

TIM’s multi-perspective decomposition is the differentiator.
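
For readers less familiar with the headline metric: F1-micro pools every prediction before computing precision and recall, so frequent subclasses dominate the score. A toy illustration with scikit-learn (a tooling choice for the example, not necessarily the paper's):

```python
from sklearn.metrics import f1_score

# One intent-subclass label per transaction (the codes here are illustrative).
y_true = ["A18", "A9", "A11", "A16", "A18"]
y_pred = ["A18", "A9", "A9",  "A16", "A20"]

# Micro-averaging counts every decision in a single pool, so for single-label
# multiclass data it reduces to accuracy: 3 of 5 correct -> 0.6.
print(f1_score(y_true, y_pred, average="micro"))
```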

Per-label performance: strengths and blind spots

TIM excels when on-chain semantics directly imply intent:

  • Voting (A18) — F1 = 1.00
  • Delegated voting (A20) — F1 = 1.00
  • Standard staking (A9) — F1 = 0.89

It struggles when intent is ambiguous or off-chain dependent:

  • Airdrop farming (A11) — F1 = 0.29
  • Stop-loss strategies (A16) — F1 = 0.47
  • Hedging (A17) — F1 = 0.43

These are precisely the categories where human interpretation also fractures.

Ablations confirm necessity

Removing any module reduces performance:

Removed Component | F1-micro
w/o Meta Planner | 0.63
w/o Domain Experts | 0.44
w/o QS | 0.31
w/o Cognitive Eval | 0.55

Whether the base model is Grok, GPT-4o, or Qwen, it is TIM’s architecture, not the underlying LLM, that makes the difference.

Implications — Why business and regulators should care

1. Risk monitoring moves from indicators to intentions

Imagine compliance workflows where alerts are triggered not by raw patterns but by inferred motives:

  • Is this user positioning for a liquidation exploit?
  • Is this cluster preparing an airdrop sybil attack?
  • Is this borrower hedging or looping risk?

Intent-aware surveillance is categorically more powerful than pattern matching alone.

2. DeFi product teams gain visibility into real user behavior

Protocol design often flies blind. Intent mining enables:

  • Insight into strategy patterns
  • Identification of UX friction points
  • Early detection of misaligned incentives

3. Governance becomes evidence-based

Rather than guessing motivations behind votes or liquidity moves, DAOs can read aggregated intent flows.

4. AI agent ecosystems will depend on frameworks like TIM

As autonomous trading and execution agents proliferate, we need transparent, verifiable reasoning. TIM’s architecture doubles as a safety scaffold.

Conclusion

Intent is the missing semantic layer in DeFi. This paper offers a compelling blueprint for recovering it — not through heuristics, but through coordinated, self-reflective agents that demand evidence at every turn.

For an industry that often celebrates “don’t trust, verify,” this work subtly flips the script: verify the why, not just the what.

Cognaptus: Automate the Present, Incubate the Future.