AI Audit Trail Developer Guide: EU AI Act & HIPAA Compliance
Modern AI applications require comprehensive audit trails to meet regulatory requirements such as the EU AI Act, HIPAA, and FINRA rules. This guide shows you how to implement production-ready AI audit logging with Traceprompt, including TypeScript examples and compliance patterns you can use today.
Who Needs This Guide
If you build or operate AI applications in regulated industries - healthcare, finance, government, education, or HR - you need audit trails that can prove:
- What happened: Complete record of AI interactions, model versions, and parameters
- Who did it: User identity, session context, and authorization details
- When it occurred: Precise timestamps with tamper-evident chronology
- Data integrity: Cryptographic proof that logs haven't been altered
What Traceprompt Logs
Traceprompt automatically captures everything needed for compliance audits:
| Category | Data Captured |
|---|---|
| Identity & Context | User ID, session info, app version, environment, timestamps, IP addresses |
| Model Details | Provider (OpenAI, Anthropic, etc.), model name/version, parameters (temperature, tokens) |
| Content & Privacy | Encrypted prompts/responses, PII detection, risk classification, content hashes |
| Integrity & Governance | BLAKE3 cryptographic hashes, Merkle tree proofs, retention policies, data classification |
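To make the table concrete, here's what a single captured event might look like. This is a hypothetical record shape for illustration only; the field names and actual schema come from the Traceprompt docs:

```typescript
// Hypothetical audit-event shape; illustrative only, not Traceprompt's actual schema.
const exampleEvent = {
  // Identity & Context
  userId: "user_12345",
  sessionId: "sess_9f2c",
  app: "my-ai-service",
  env: "production",
  timestamp: "2025-01-15T14:30:15.123Z",
  ipAddress: "203.0.113.7",

  // Model Details
  modelVendor: "openai",
  modelName: "gpt-4o",
  params: { temperature: 0.7 },

  // Content & Privacy
  promptCiphertext: "<encrypted>",
  responseCiphertext: "<encrypted>",
  piiDetected: ["PERSON_NAME", "PHONE_NUMBER"],
  riskLevel: "high",

  // Integrity & Governance
  contentHash: "8376966c62452b8b…", // BLAKE3 hash, later anchored via a Merkle proof
  retentionPolicy: "7-years",
};
```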
Quick Start Implementation
Here's how to add audit logging to your existing LLM application with minimal code changes:
1. Install and Configure
```bash
# Install the SDK
npm install @traceprompt-node

# Or with yarn
yarn add @traceprompt-node
```
Create a configuration file:
```yaml
# .tracepromptrc.yml
apiKey: tp_live_xxxxx

# Optional: add static metadata to all logs
staticMeta:
  app: "my-ai-service"
  env: "production"
  version: "1.2.0"
```
2. Wrap Your LLM Calls
```typescript
import { init, wrap } from "@traceprompt-node";
import OpenAI from "openai";

// Initialize Traceprompt once
await init();

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

// Wrap your existing LLM function
const auditedChat = wrap(
  (prompt: string) =>
    openai.chat.completions.create({
      messages: [{ role: "user", content: prompt }],
      model: "gpt-4o",
      temperature: 0.7,
    }),
  {
    modelVendor: "openai",
    modelName: "gpt-4o",
    userId: "user_12345", // Your user identifier
  }
);

// Use exactly as before - your app doesn't change!
const response = await auditedChat("Analyze this patient data...");
console.log(response.choices[0].message.content);
```
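Because the wrapped function keeps the original signature and return value, your existing error handling should carry over unchanged. A minimal sketch, assuming errors from the underlying OpenAI call propagate through the wrapper:

```typescript
try {
  const response = await auditedChat("Summarize the attached lab results.");
  console.log(response.choices[0].message.content);
} catch (err) {
  // Handle failures exactly as you would for a direct OpenAI call.
  console.error("LLM call failed:", err);
}
```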
3. Supported LLM Providers
Traceprompt works with all major LLM providers:
```typescript
// OpenAI
const openaiChat = wrap(openaiCall, {
  modelVendor: "openai",
  modelName: "gpt-4o",
  userId: userId,
});

// Anthropic
const anthropicChat = wrap(anthropicCall, {
  modelVendor: "anthropic",
  modelName: "claude-3-sonnet",
  userId: userId,
});

// Local models
const localChat = wrap(localModelCall, {
  modelVendor: "local",
  modelName: "llama-3.1-70b",
  userId: userId,
});

// Other providers: gemini, mistral, deepseek, xai
```
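The functions being wrapped (`openaiCall`, `anthropicCall`, `localModelCall`) are ordinary async functions you define yourself. As a sketch, `anthropicCall` could be built on the official `@anthropic-ai/sdk` like this (the model string is an example; use whatever you actually deploy):

```typescript
import Anthropic from "@anthropic-ai/sdk";

const anthropic = new Anthropic({
  apiKey: process.env.ANTHROPIC_API_KEY,
});

// A plain async function - wrap() takes it as-is.
const anthropicCall = (prompt: string) =>
  anthropic.messages.create({
    model: "claude-3-sonnet-20240229",
    max_tokens: 1024,
    messages: [{ role: "user", content: prompt }],
  });
```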
What You'll See in the Dashboard
Once the SDK is installed and your app is running, you'll see a stream of logs in the platform:

The dashboard provides real-time visibility into your AI interactions, including:
- High/Critical Interactions: Automatically flagged based on PII detection and risk assessment
- PII Exposure Analysis: Detailed breakdown of detected sensitive information (names, phone numbers, bank details, etc.)
- Audit Activity: Track decryptions, audit pack generation, and verification activities
- Complete Interaction Logs: Timestamps, models, user IDs, detected PII, and risk levels for every AI call
Cryptographic Integrity
Traceprompt uses multiple layers of cryptographic protection to ensure audit trail integrity:
BLAKE3 Hashing
Each audit event gets a unique cryptographic fingerprint:
```typescript
import { blake3 } from "@napi-rs/blake-hash";

// How Traceprompt computes leaf hashes
function computeLeaf(data: string | Buffer): string {
  return blake3(data).toString("hex");
}

// Each event gets hashed for tamper detection
const eventHash = computeLeaf(JSON.stringify(auditEvent));
```
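The leaf hashes are then combined into a Merkle tree, so a single root commits to an entire batch of events. The exact construction is Traceprompt's; the sketch below only illustrates the idea, pairing hashes level by level (duplicating the last node on odd-sized levels) and reusing `computeLeaf` from above:

```typescript
// Illustrative Merkle root construction - not necessarily Traceprompt's exact scheme.
function merkleRoot(leaves: string[]): string {
  if (leaves.length === 0) throw new Error("cannot build a tree from zero leaves");
  let level = leaves;
  while (level.length > 1) {
    const next: string[] = [];
    for (let i = 0; i < level.length; i += 2) {
      const left = level[i];
      const right = level[i + 1] ?? left; // duplicate the last node on odd levels
      next.push(computeLeaf(left + right));
    }
    level = next;
  }
  return level[0];
}
```

Changing any single event changes its leaf hash and, in turn, every hash on the path up to the root, so one published root is enough to detect tampering anywhere in the batch.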
Public Anchoring
Merkle roots are published to a public GitHub repository for independent verification:
GitHub Anchor Format
```csv
# In traceprompt/open-anchors repository
# File: 2025-01-15.csv
timestamp,org_id,batch_id,leaf_count,merkle_root_hex
2025-01-15T14:30:15.123Z,org_abc123,1001,250,8376966c62452b8b623b131b8af3a18d34ff65d9445c52c3b1c19e2c1f9f5b9f
2025-01-15T14:45:22.456Z,org_def456,1002,180,7265855b51341a7a512a020a7ae2f07c23ee54c8334b41b2a0f08e1b0e8e4a8e
```
All commits are GPG-signed and publicly verifiable. Auditors can independently verify any batch without trusting Traceprompt.
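As an illustration of that independence, an auditor who exports a batch's leaf hashes could recompute the root and compare it to the anchored CSV row. The snippet below is a sketch using the `merkleRoot` helper above and hypothetical file names; the recomputation only matches if it mirrors Traceprompt's actual tree construction:

```typescript
import { readFileSync } from "node:fs";

// Hypothetical inputs: exported leaf hashes for batch 1001, plus the anchor file.
const leaves: string[] = JSON.parse(readFileSync("batch_1001_leaves.json", "utf8"));
const anchorRow = readFileSync("2025-01-15.csv", "utf8")
  .split("\n")
  .find((row) => row.split(",")[2] === "1001");

const anchoredRoot = anchorRow?.split(",")[4];

console.log(
  merkleRoot(leaves) === anchoredRoot
    ? "Batch verified against public anchor"
    : "MISMATCH: audit trail may have been altered"
);
```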
View the open-anchors repository
Next Steps
Ready to implement compliant AI audit trails? Here's what to do next:
- Sign up for Traceprompt: Create your account and get your API keys
- Set up AWS KMS: Configure your own encryption keys for maximum security
- Install the SDK: Add @traceprompt-node to your project
- Start small: Wrap one LLM call and verify it works
- Scale up: Apply to all AI interactions in your application
- Test audit packs: Generate and verify your first audit package
Need Help?
Our team can help you implement compliant AI audit trails for your specific use case and regulatory requirements.
Get Implementation Support