Netrum Docs Logo
Developer

Developer

Developer Program and Developer Hub

  • The Developer Hunter Program, the Developer Pool and the operational framework that governs developer engagement during Testnet and beyond. It is written for product leaders, developer relations teams, investors and senior engineers.
  • The Netrum Developer Hub is the canonical platform for building AI-native Web3 solutions on Netrum. It provides SDKs, APIs, documentation, sandbox environments and a structured program to recruit and reward elite contributors.
  • The Developer Hunter Program and the capped Developer Pool form a tightly governed experiment in quality-first community building. Participation provides technical early-access, economic upside in NPT, reputational credit on an on-chain profile, and direct influence over API standards and protocol priorities.

Developer Hub

  • Core capabilities

  • • Fully documented APIs: JSON-RPC passthrough, high-level REST endpoints, intent and attestation services, simulation and deployment endpoints.

    • SDKs and plugins: JavaScript, TypeScript, Python, Go, Unity, and Unreal.

    • Sandbox environment: Free demo APIs and testnet endpoints for prototyping without financial cost.

    • Paid production tier: Scalable endpoints, higher quotas, priority support and enterprise integrations with optional fiat or crypto billing.

    • Observability and logs: Per-request trace_id, replayable traces and downloadable audit bundles.

    • Template library: Contract templates, dApp scaffolds, bot starters and merchant SDKs.
  • Free demo APIs
  • Rapid prototyping, risk-free experiments and onboarding. Limits: reduced rate limits, no attestation anchoring and sandbox tokens.
  • Paid APIs
  • Production throughput, attested responses, priority routing and enterprise security. Billing: flexible with crypto or fiat, metered by usage, attestation requests and priority throughput.

Developer Hunter Program and Developer Pool

  • The Developer Hunter Program is a staged, selective engagement engine. The program combines quality filters, merit gating, and economic incentives to attract high-signal contributors who will stress, extend and harden the protocol.
  • Cohort sizing and phases

  • • Phase 1: 100 hand-selected developers.

    • Phase 2: +150 additional developers.

    • Phase 3: +250 additional developers.

    • Total cohort cap: 500 active developers during Testnet.
  • Selection and access

  • • Apply via the Dev-Hunter form. Submissions include a resume, GitHub, past projects and a short technical task.

    • Selection criteria: technical competence, prior Web3 or AI experience, community reputation, and alignment with project goals.

    • Meritocratic access: seats are scarce and awarded to those likely to produce maximal value.

Developer roles, responsibilities and expectations

  • Role categories

  • • Integrator: builds dApps, bots, merchant integrations and SDK wrappers.

    • Security analyst: finds reproducible vulnerabilities and triages them with remediation proposals.

    • Performance engineer: stress tests nodes, APIs and inference pipelines.

    • UX tester: validates flows, documentation and the developer experience.

    • Researcher: explores novel AI + Web3 use cases and submits design proposals.
  • Core expectations

  • • Deliver reproducible artifacts for every bug or test case. Artifacts may include test scripts, traces, sandbox deployments and minimal repro repos.

    • Provide structured feedback in issue templates and RFC-style API proposals.

    • Participate in periodic community reviews, demo days and technical syncs.

    • Respect coordinated disclosure and bounty rules for security findings.

Developer Tasks During Testnet

  • Role categories
  • The Developer Tasks program for Netrum Testnet is a deliberate, high-signal engagement framework. It is not a checklist. It is a co-creative mechanism that converts developer effort into protocol quality, robust public goods and long-term stakeholder value.
  • 1. The Why - mission-level intent behind the tasks
  • Testing at the level we require is not about confirming that an endpoint returns a response. It is about stress testing imagination, surfacing edge-case behaviour, exposing real-world failure modes and turning those discoveries into durable improvements.
  • The Testnet is also the first market for novel AI x Web3 use cases. Developer contributions will define API ergonomics, surface security issues, validate merchant flows, and build the early applications that demonstrate Netrum’s product-market fit.
  • 2. Task categories and expected deliverables
  • Each task category includes objectives, expected artifacts, and quality criteria.
  • 2.1 Testing AI Modules & APIs
  • Probe the Intent Compiler, AI modules and API surface to reveal correctness, robustness and failure modes.
  • Expected deliverables:

  • • Structured test cases that include input, expected output, and observed output.

    • Edge-case scenarios and adversarial prompts for NLP / intent components.

    • Repro scripts and telemetry traces (trace_id, input audio, transcript, intent JSON, simulation output).

    Quality criteria:

    • Reproducible steps.

    • Clear description of risk or failure.

    • Suggested remediation or hypothesis for root cause.
  • 2.2 Building Small dApps & Bots
  • Validate integration flows, latency, and real-world usability by building minimum viable connectors and bots.
  • Typical projects:

  • • Telegram or Discord bot that exposes wallet balances, intents, or a testnet faucet.

    • Merchant demo integrating a voice checkout in a simple storefront.

    Deliverables:

    • Source repository with README, quickstart and deployed demo link.

    • Load profile and latency metrics under target usage.

    • User flows that demonstrate a production usage scenario.
  • 2.3 Reviewing Products & Structured Feedback
  • Provide sharp, actionable product feedback that improves UX, docs and onboarding.
  • Deliverables:

  • • Structured reviews using a template (page, step, problem, reproduction, suggested text or design).

    • Prioritised backlog suggestions tagged by impact and effort.

    Quality criteria:

    • Clear reproduction.

    • Concrete recommendation, not only critique.
  • 2.4 Reporting Bugs & UX Issues
  • Find security, performance and UX defects before mainnet.
  • Deliverables:

  • • Formal bug report with reproduction steps, artifacts and a severity estimate.

    • Optional PoC code or minimal failing test.

    Quality criteria:

    • Reproducibility and minimal demo.

    • Attach trace ids, logs and any sandbox states.
  • 3. The Developer Profile - immutable reputation ledger
  • Every contributor earns a verifiable public profile that records validated contributions, badges and on-chain claim receipts. This profile is:

  • • Persistent and public, suitable as an on-chain resume.

    • Used for program eligibility, grant prioritisation and governance weight in developer-focused proposals.

    • Updated with each validated submission and anchored for audit.
  • Profile elements:

  • • Wallet address and public handle.

    • Contribution ledger with timestamps, categories and verified artifacts.

    • Leaderboard rank and earned badges.

    • Cumulative NPT awarded and vesting status.
  • 4. Reward Mechanism - value-based incentives
  • Rewards are not hourly pay. They are aligned, value-based compensation denominated in NPT. Key principles:

  • • Value alignment. Rewards are commensurate with the impact of the contribution.

    • Transparency. Reward schedules and mappings are published.

    • Anti-abuse. On-chain receipts, manual review, and slashing for fraudulent claims.

    • Vesting. Larger grants are escrowed and vested to align long-term incentives.
  • Payout process:

  • • Submission evaluated and scored.

    • If validated, micro rewards are credited; larger grants enter conversion escrow with vesting terms.

    • All rewards produce on-chain claim receipts and entries in the developer profile.
  • 5. Dev-Hunter Gate - meritocracy and anti-Sybil controls
  • The Dev-Hunter registration form is a deliberate quality filter. Gate mechanics:

  • • Application fields require links to prior work, GitHub, and a short technical task.

    • Seat allocation follows technical review and reputation heuristics.

    • For high-value grants, optional KYC applies.
  • Anti-Sybil features:

  • • Single-account policies enforced by cross-checks: IP heuristics, submission quality, and time-based reputation accrual.

    • Reputation thresholds for high-value reward eligibility.

    • Manual curation for disputed or borderline cases.
  • 6. Developer Pool - curated cohort structure and mission
  • The capped Developer Pool of 500 builders is a strategic cohort designed for quality and manageability. Dual mission:

  • • Horizontal stress testing. Run the system across real-world stacks and workflows to reveal integration issues.

    • Vertical innovation. Fund experimentation and disruptive use cases that stretch the protocol in imaginative ways.
  • Cohort logistics:

  • • Rolling onboarding across phases: 100 → +150 → +250.

    • Access tiers: core Hunters, extended Hunters, community contributors.

    • Regular demo days, technical syncs, and curated bounties.
  • 7. The Horizontal Stress Test - scope and sample questions
  • This programmatic test surface is broad and pragmatic. Questions testers answer:

  • • How does the AI behave when proxied by a Web2 e-commerce checkout?

    • Do node sync and RPC semantics remain stable across wallet implementations?

    • Can non-crypto merchants complete a settlement workflow with only the merchant UI?
  • Deliverables:

  • • Interop reports, error surface maps and reproducible test suites.
  • 8. The Vertical Innovation Push - experiment design
  • This is a funded exploration phase. Expected outputs:

  • • Novel prototypes that push the boundaries of AI x Web3 (for example, voice-native governance agents, autonomous merchant concierges, or AI-assembled DAOs).

    • Innovation grant applications with milestone-based funding.

    • Public experiments that surface new standards and template primitives.
  • Evaluation:

  • • Potential impact and market fit.

    • Technical feasibility and safety.

    • Reusability as a canonical integration or SDK.