Verifiable AI

Tenzro Platform provides cryptographic verification of AI inference through integration with the Canton Network ledger. Every inference can be recorded on-chain with TEE attestation, creating an immutable audit trail.

Overview

Verifiable AI combines three key technologies:

  • TEE Execution - AI models run in Trusted Execution Environments (Intel TDX, AMD SEV-SNP, AWS Nitro)
  • Cryptographic Attestation - Hardware-signed proofs of secure execution
  • Ledger Recording - Immutable on-chain records via Canton Network

How It Works

┌─────────────────┐     ┌─────────────────┐     ┌─────────────────┐
│   Client App    │────▶│   TEE Enclave   │────▶│  Canton Ledger  │
│                 │     │   (AI Model)    │     │                 │
│  1. Request     │     │  2. Inference   │     │  4. Record      │
│                 │◀────│  3. Attestation │◀────│     Proof       │
└─────────────────┘     └─────────────────┘     └─────────────────┘

Recording Inference

After running inference, record the result to the Canton ledger:

import { TenzroPlatform } from '@tenzro/platform';

const platform = new TenzroPlatform({
  apiKey: process.env.TENZRO_API_KEY!,
  tenantId: process.env.TENZRO_TENANT_ID!,
});

// 1. Run inference with attestation
const inference = await platform.ai.infer({
  modelId: 'llama-3.1-70b',
  prompt: 'Analyze transaction for compliance risks',
  maxTokens: 1024,
  attestation: true, // Request TEE attestation
});

console.log('TEE Type:', inference.teeAttestation?.type);
console.log('Enclave:', inference.teeAttestation?.enclaveMeasurement);
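
// Note (assumption): teeAttestation may be undefined when the selected model
// is not hosted in a TEE. Guard before recording so an unverifiable result is
// never written to the ledger.
if (!inference.teeAttestation) {
  throw new Error(`No TEE attestation returned for model ${inference.modelId}`);
}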

// 2. Record to ledger
const proof = await platform.ai.recordInference({
  modelId: inference.modelId,
  inputHash: inference.inputHash,
  outputHash: inference.outputHash,
  attestation: inference.teeAttestation!,
  metadata: {
    purpose: 'compliance-check',
    transactionId: 'tx-123',
  },
});

console.log('Proof ID:', proof.proofId);
console.log('Contract ID:', proof.contractId);
console.log('Recorded at:', proof.timestamp);

Verifying Inference

Anyone with the proof ID can verify that the inference ran inside an attested TEE and that the ledger record is intact:

// Verify a recorded inference
const verification = await platform.ai.verifyInference({
  proofId: 'proof-abc123',
});

console.log('Valid:', verification.valid);
console.log('Model:', verification.proof.modelId);
console.log('Input Hash:', verification.proof.inputHash);
console.log('Output Hash:', verification.proof.outputHash);
console.log('TEE Type:', verification.proof.attestation.type);
console.log('Recorded:', verification.proof.timestamp);

// Verify attestation chain
console.log('Attestation Valid:', verification.attestationValid);
console.log('Ledger Valid:', verification.ledgerValid);
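
In addition to the platform-side checks, a caller that still has the original input and output can recompute the hashes locally and compare them with the recorded values. A minimal sketch using Node's built-in crypto module, assuming the hashes are hex-encoded SHA-256 digests of the raw UTF-8 payloads (see the Ledger Proof Structure section below); originalPrompt and originalOutput stand in for the payloads you sent and received:

import { createHash } from 'node:crypto';

// Recompute SHA-256 digests of the original payloads and compare them
// with the hashes stored in the ledger proof.
const sha256Hex = (data: string) =>
  createHash('sha256').update(data, 'utf8').digest('hex');

const inputMatches = sha256Hex(originalPrompt) === verification.proof.inputHash;
const outputMatches = sha256Hex(originalOutput) === verification.proof.outputHash;

console.log('Input matches recorded hash:', inputMatches);
console.log('Output matches recorded hash:', outputMatches);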

Query Inference History

// Get inference history for your tenant
const history = await platform.ai.getInferenceHistory({
  modelId: 'llama-3.1-70b', // Optional filter
  startTime: new Date('2026-01-01'),
  endTime: new Date(),
  limit: 100,
});

for (const record of history.records) {
  console.log(`[${record.timestamp}] Model: ${record.modelId}`);
  console.log(`  Input: ${record.inputHash}`);
  console.log(`  Output: ${record.outputHash}`);
  console.log(`  TEE: ${record.attestation.type}`);
  console.log(`  Proof: ${record.proofId}`);
}
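
For audit reporting it can help to summarize the returned records, for example by counting proofs per TEE type. A small sketch that uses only the fields shown above:

// Count recorded inferences per TEE type for a quick audit summary.
const countsByTeeType = new Map<string, number>();
for (const record of history.records) {
  const teeType = record.attestation.type;
  countsByTeeType.set(teeType, (countsByTeeType.get(teeType) ?? 0) + 1);
}

for (const [teeType, count] of countsByTeeType) {
  console.log(`${teeType}: ${count} recorded inference(s)`);
}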

React Integration

import {
  TenzroProvider,
  useInfer,
  useRecordInference,
  useInferenceHistory,
  useVerifyInference,
  VerifiableAIDashboard,
} from '@tenzro/platform-ui';

function VerifiableAIDemo() {
  const { infer, result, loading } = useInfer();
  const { record, loading: recording } = useRecordInference();
  const { verify, result: verification } = useVerifyInference();
  const { history, refresh } = useInferenceHistory();

  const handleInferAndRecord = async () => {
    // Run inference with TEE attestation (named inferResult to avoid
    // shadowing the hook's `result` value above)
    const inferResult = await infer({
      modelId: 'llama-3.1-70b',
      prompt: 'Analyze this transaction...',
      attestation: true,
    });

    // Record to ledger
    if (inferResult?.teeAttestation) {
      await record({
        modelId: inferResult.modelId,
        inputHash: inferResult.inputHash,
        outputHash: inferResult.outputHash,
        attestation: inferResult.teeAttestation,
      });
      refresh(); // Update history
    }
  };

  return (
    <div>
      <button onClick={handleInferAndRecord} disabled={loading || recording}>
        {loading ? 'Running...' : recording ? 'Recording...' : 'Run & Record'}
      </button>

      {/* Or use the pre-built dashboard */}
      <VerifiableAIDashboard
        showStats={true}
        showHistory={true}
        onVerify={(proof) => verify({ proofId: proof.proofId })}
      />
    </div>
  );
}
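
If the pre-built dashboard is more than you need, the same hooks can drive a custom view. A minimal sketch of a history list with per-record verification, assuming the record and verification shapes shown elsewhere on this page:

function InferenceHistoryList() {
  const { history } = useInferenceHistory();
  const { verify, result: verification } = useVerifyInference();

  return (
    <ul>
      {history?.records.map((record) => (
        <li key={record.proofId}>
          <code>{record.modelId}</code> at {record.timestamp}{' '}
          <button onClick={() => verify({ proofId: record.proofId })}>
            Verify
          </button>
          {verification?.proof.proofId === record.proofId && (
            <span>{verification.valid ? ' Valid' : ' Invalid'}</span>
          )}
        </li>
      ))}
    </ul>
  );
}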

Ledger Proof Structure

interface LedgerInferenceProof {
  // Unique proof identifier
  proofId: string;

  // Canton contract reference
  contractId: string;

  // Model information
  modelId: string;
  modelVersion: string;

  // Cryptographic hashes
  inputHash: string;    // SHA-256 of input
  outputHash: string;   // SHA-256 of output

  // TEE attestation
  attestation: {
    type: 'intel_tdx' | 'amd_sev_snp' | 'aws_nitro';
    enclaveMeasurement: string;
    rawAttestation: string;
    timestamp: string;
  };

  // Ledger metadata
  tenant: string;
  party: string;
  timestamp: string;
  domain: string;

  // Optional metadata
  metadata?: Record<string, string>;
}
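
For illustration, a populated proof might look like the following; every value is a placeholder, not real output:

const exampleProof: LedgerInferenceProof = {
  proofId: 'proof-abc123',
  contractId: '00a1b2c3...',
  modelId: 'llama-3.1-70b',
  modelVersion: '1.0.0',
  inputHash: '3b4f7a9c...',   // SHA-256 of the input
  outputHash: '9f86d081...',  // SHA-256 of the output
  attestation: {
    type: 'intel_tdx',
    enclaveMeasurement: 'a1b2c3d4...',
    rawAttestation: '<base64-encoded quote>',
    timestamp: '2026-01-15T10:30:00Z',
  },
  tenant: 'acme-corp',
  party: 'party::acme-corp::app',
  timestamp: '2026-01-15T10:30:02Z',
  domain: 'canton-domain-1',
  metadata: {
    purpose: 'compliance-check',
  },
};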

Use Cases

Regulatory Compliance

Record AI-driven compliance decisions for audit trails:

const decision = await platform.ai.infer({
  modelId: 'compliance-analyzer',
  prompt: `Review transaction: ${JSON.stringify(transaction)}`,
  attestation: true,
});

await platform.ai.recordInference({
  modelId: decision.modelId,
  inputHash: decision.inputHash,
  outputHash: decision.outputHash,
  attestation: decision.teeAttestation!,
  metadata: {
    transactionId: transaction.id,
    regulationType: 'AML',
    decisionType: 'automated',
  },
});

Financial Analysis

Prove AI recommendations were generated correctly:

const analysis = await platform.ai.infer({
  modelId: 'portfolio-analyzer',
  prompt: `Analyze portfolio risk: ${JSON.stringify(portfolio)}`,
  attestation: true,
});

const proof = await platform.ai.recordInference({
  modelId: analysis.modelId,
  inputHash: analysis.inputHash,
  outputHash: analysis.outputHash,
  attestation: analysis.teeAttestation!,
  metadata: {
    portfolioId: portfolio.id,
    analysisType: 'risk-assessment',
    clientId: client.id,
  },
});

// Share proof with client
console.log('Verification link:', `https://verify.tenzro.com/${proof.proofId}`);

Multi-Party Verification

Allow multiple parties to verify shared AI analysis:

// Record with multiple observers
const proof = await platform.ai.recordInference({
  modelId: inference.modelId,
  inputHash: inference.inputHash,
  outputHash: inference.outputHash,
  attestation: inference.teeAttestation!,
  observers: [
    'party::counterparty::auditor',
    'party::regulator::inspector',
  ],
});

// Each observer can independently verify
// using their own Canton ledger access
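
From an observer's side, verification uses the same call with the shared proof ID. A sketch, assuming the observer connects through their own Tenzro tenant and Canton party; the OBSERVER_* environment variables are placeholders for the observer's own credentials:

// Observer verifies the shared proof against their own ledger view.
const observerPlatform = new TenzroPlatform({
  apiKey: process.env.OBSERVER_TENZRO_API_KEY!,
  tenantId: process.env.OBSERVER_TENZRO_TENANT_ID!,
});

const observerCheck = await observerPlatform.ai.verifyInference({
  proofId: proof.proofId,
});

console.log('Observer sees valid proof:', observerCheck.valid);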

Best Practices

  • Always request attestation for sensitive inference operations
  • Include relevant metadata to provide context for audits
  • Store proof IDs alongside your application data for easy lookup
  • Set appropriate observers for multi-party scenarios
  • Verify proofs periodically to ensure ledger integrity (a periodic check is sketched below)
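
The periodic check mentioned in the last point might look like the following sketch, which re-verifies proofs recorded in the last 24 hours using the documented getInferenceHistory and verifyInference calls:

// Re-verify all proofs recorded in the last 24 hours.
const recent = await platform.ai.getInferenceHistory({
  startTime: new Date(Date.now() - 24 * 60 * 60 * 1000),
  endTime: new Date(),
  limit: 100,
});

for (const record of recent.records) {
  const check = await platform.ai.verifyInference({ proofId: record.proofId });
  if (!check.valid) {
    console.error(`Proof ${record.proofId} failed verification`);
  }
}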

API Reference

Method                      Description
ai.infer()                  Run inference with optional attestation
ai.recordInference()        Record inference to Canton ledger
ai.verifyInference()        Verify a recorded inference
ai.getInferenceHistory()    Query inference history