πŸ›‘οΈ Defense-in-Depth

Six Layers Between AI Fraud
and Your Human Art

We think like attackers so you don't have to. KRONYQL's upload pipeline makes passing off AI content as human work as expensive, difficult, and detectable as possible, without burdening legitimate human artists.

🎯 First Principles: The Attacker's Mindset
πŸ€– Upload AI-generated audio
Pass off MusicLM, Jukebox, or Suno-generated tracks as human-made to exploit the marketplace
πŸ”“ Bypass automated detection
Add "human-like" artifacts β€” micro-timing jitter, noise, imperfections β€” to evade spectral analysis
🌊 Flood with cheap AI tracks
Devalue human art or game the platform credit system with high volume, low-effort content
πŸ‘₯ Spoof metadata & identities
Use fake accounts, synthetic identities, and multiple personas to scale the attack
🎯
Our goal: Raise the cost of a successful attack so high that it's economically irrational. Security by layers, incentives, and friction β€” not a single silver bullet.

The Six Layers

Each layer catches what the previous one missed. All six run before a file ever reaches the marketplace.

1
First Line of Defense
Identity & Provenance
Purpose: Prove the uploader is a real human with a legitimate creative practice. An attacker needs to build a credible identity for each account β€” time-consuming and expensive.
πŸ‘‘ Verified Sovereign Tier
Only accounts that pass identity verification can upload. Professional email, social proof (Spotify, Bandcamp, Instagram), phone verification.
🀝 Referral Trust Network
New artists can be referred by an existing Verified Sovereign β€” building a web of trust that's expensive to fake.
⏳ Rate Limiting
Unverified accounts: 1 upload per week. Trust increases with verification level, tenure, and community standing.
⚠️ Strike System
If a user is ever caught uploading AI content, even when it's detected after the fact, they are permanently banned and all of their uploads are removed. A massive disincentive.
🎯 Attack: Creates hundreds of fake accounts with synthetic identities
πŸ›‘οΈ Defense: Phone verification + social proof + referral network makes scaling prohibitively expensive
2
Cryptographic Proof
Source Proof
Purpose: Offer artists a way to cryptographically prove their audio was captured from a real performance. Optional but strongly encouraged β€” gives a massive boost to verification score and unlocks premium features.
πŸ“± Recording Device Signature
Mobile app signs audio files with a private key stored in the phone's secure enclave (TEE). Proves the file was recorded by that device at a specific time.
πŸŽ›οΈ Session Hash (DAW Plugin)
For studio artists: a plugin generates a hash of the session state (track count, MIDI data, etc.) signed by the artist's key at moment of export.
πŸŽ₯ Video Anchorβ„’
Low-tech alternative: record a short video of yourself performing the piece. Face detection + audio alignment verifies the video matches the upload.
πŸ“ˆ Score Boost
Source proofs unlock leaderboards, voting rights, and Verified badge. Over time, most serious artists adopt them β€” making unproven uploads easy to monitor.
🎯 Attack: Generates audio purely in software β€” no physical recording chain
πŸ›‘οΈ Defense: Cryptographic signing proves audio originated from a real device at a real time β€” impossible to replicate from a text prompt
3
Rights & Ownership
Ownership Verification
Purpose: Confirm the uploader actually owns (or has rights to) the content they're uploading. Identity (Layer 1) proves who you are. Source Proof (Layer 2) proves the recording is real. This layer proves you have the right to distribute it.
✍️ Signed Ownership Attestation
Every upload requires a digitally signed declaration: "I own or have legal rights to distribute this work." This attestation is timestamped, immutable, and legally binding β€” creating personal liability for fraudulent claims.
πŸ”’ ISRC / ISWC Cross-Reference
If the track has an International Standard Recording Code (ISRC) or International Standard Musical Work Code (ISWC), we verify it against global registries. Mismatched codes flag the upload for manual review.
🀝 Collaborator Confirmation
If splits are declared (co-writers, producers, featured artists), all named collaborators receive a confirmation request. The track enters a 48-hour hold until all parties confirm their share β€” or the uploader certifies sole ownership.
πŸ—„οΈ Catalog Database Matching
Audio fingerprint is cross-referenced against existing catalog databases (MusicBrainz, ASCAP, BMI, SESAC, SoundExchange) and every track already on KRONYQL. If a match is found, the upload is held and the original rights holder is notified.
⏳ Dispute Window
Every new upload enters a 7-day dispute window. During this period, existing rights holders anywhere on the platform can flag a conflict. Disputed tracks are frozen until resolution β€” protecting original creators.
πŸ“œ Chain of Title
For label and publisher vaults: full chain-of-title documentation is required β€” master rights, sync rights, mechanical rights. Every link in the chain is recorded on the Sovereign Ledger for permanent, auditable proof.
🎯 Attack: Uploads someone else's music and claims ownership β€” common on other platforms with no verification
πŸ›‘οΈ Defense: Signed attestation creates legal liability, fingerprint matching catches duplicates, collaborator confirmation prevents unauthorized splits, and the dispute window gives rights holders a voice before the track goes live
4
Pattern Recognition
Behavioral Analysis
Purpose: Monitor user behavior for signs of automation and fraud. All behavioral flags feed into a risk score β€” high-risk uploads are routed to manual review.
⏱️ Upload Rate Monitoring
Limit to 1 track per hour for new accounts. Escalating limits based on trust level. Burst uploads trigger automatic review queue.
πŸ“‹ Metadata Anomaly Detection
AI tracks often have generic titles, no album art, or metadata that's "too perfect" (no typos, uniform formatting). Statistical anomalies get flagged.
πŸ•ΈοΈ Network Analysis
IP addresses, browser fingerprints, and payment methods cross-referenced against known fraud patterns and linked accounts.
🍯 Honeypot Traps
Invisible "trap" files embedded in the platform. Only automated scrapers would interact with them β€” any account that does gets immediately flagged.
🎯 Attack: Uses scripts to upload many tracks quickly, overwhelming detection
πŸ›‘οΈ Defense: Rate limits + behavioral scoring + honeypots catch automation patterns before content is even analyzed
5
Fast Heuristics
Content Forensics (Light)
Purpose: Quick, cheap heuristics to reject obvious AI before the expensive deep analysis runs. Catches low-hanging fruit β€” if confidence > 95%, file is rejected immediately with appeal link.
πŸ“Š Frequency Spectrum Anomalies
AI often leaves characteristic spectral peaks or missing frequencies that are invisible to the ear but visible to analysis. Fast FFT scan catches obvious AI signatures.
πŸ”‡ Noise Floor Consistency
Human recordings have natural background noise variations. AI often produces an unnaturally flat or uniform noise floor β€” a dead giveaway.
πŸŒ€ Phase Coherence Check
Lightweight version of the full Dunbar phase-smear detection. Catches phase artifacts that generative models introduce when synthesizing stereo audio.
πŸ₯ Tempo Stability Analysis
Robotic quantization vs. human micro-timing. Real drummers (like Aynsley Dunbar) have natural fluctuations that are extremely hard to simulate perfectly.
🎯 Attack: May have used AI but didn't try hard to hide it
πŸ›‘οΈ Defense: Fast heuristics catch 80%+ of unmasked AI at near-zero compute cost β€” instant rejection saves expensive deep analysis
6
Deep Forensic Analysis
The Dunbar Calibrationβ„’
Purpose: The final arbiter. Full forensic analysis runs transient analysis, automation variance, and phase-smear detection on every file that passes the light heuristics, and outputs a confidence score (0–100%).
⚑ Transient Analysis
Examines attack transients of every note β€” the microsecond-level dynamics that define how a real musician strikes, plucks, or breathes.
🎚️ Automation Variance
Maps the organic ebb and flow of volume, pitch, and timbre across the performance. AI tends toward statistical regularity that humans never achieve.
πŸŒ€ Phase-Smear Detection
Deep analysis of stereo phase relationships. Generative models leave phase artifacts that are invisible to basic checks but unmistakable under forensic scrutiny.
🧬 Adversarial Training
The Dunbar model is continuously retrained against the latest AI generators. When they evolve, we evolve. Arms race by design.
🎯 Attack: High-quality AI with added human artifacts and adversarial noise
πŸ›‘οΈ Defense: Holistic forensic analysis across multiple dimensions β€” even if one metric is fooled, the aggregate signature exposes the forgery

The Upload Flow

πŸ”‘
1. Artist Logs In
Verified email at minimum. Identity score, behavioral risk, and rate limits are checked in the background (Layers 1 and 4).
πŸ“€
2. File Uploaded
Track title, album art, and optional source proof (video, signed file, session hash) submitted through the Vault Upload Portal.
πŸ“œ
3. Ownership Verification (Layer 3)
Signed attestation, ISRC/ISWC cross-reference, collaborator confirmation, and catalog matching run automatically. Disputes trigger a 72-hour review window.
πŸ”
4. Light Forensics (Layer 5)
Fast heuristics run immediately: spectrum, noise floor, phase, tempo. If flagged (>95% AI confidence) β†’ instant rejection with appeal link. If passes β†’ proceed.
🧬
5. Dunbar Calibrationβ„’ (Layer 6)
Full forensic analysis runs asynchronously. Transient analysis, automation variance, phase-smear detection. Score generated in seconds.
πŸ“‹
6. Score & Publish
Score determines outcome: >90% = Human Verified badge. 70–90% = Published with "Pending" badge. <70% = Hidden, artist notified to appeal.

Dunbar Score Outcomes

> 90%
βœ“ Human Verified
Published with Full Badge
Qualifies for leaderboards, voting rights, The Barristerβ„’ protection, and Sovereign tier advancement.
70–90%
⏳ Pending Verification
Published with Notice
Track is live but does not qualify for Verified badge, leaderboards, or voting rights. Artist can submit additional proof.
< 70%
🚫 Flagged
Hidden β€” Appeal Required
Track is hidden from public. Artist is notified and can appeal with video, session file, or live performance challenge.

Attack Vector β†’ Defense Matrix

Attack Vector β†’ Defense
High-quality AI with human artifacts β†’ Dunbar's deep forensic analysis catches subtle phase, timing, and automation patterns across multiple dimensions simultaneously
Multiple fake accounts at scale β†’ Identity verification + referral network + phone verification + strike system makes scaling prohibitively expensive
Uploading stolen human recordings β†’ Source proof (video, session hash) ties the artist to creation. The original artist can dispute with their own proof
Adversarial noise to bypass heuristics β†’ Light heuristics are only the first pass; the deep Dunbar analysis catches adversarial noise because it analyzes the performance holistically
Flooding with borderline AI content β†’ Files β‰₯20% AI are rejected outright, so you'd have made $0 anyway. Below 20%, the AI Tax (5% per 1% AI) makes it financially pointless: at just 10% AI, half your revenue is gone, plus it damages your verified status, rank progression, and governance power
Exploiting the appeal system β†’ Each appeal requires manual review plus additional proof (video, session, live challenge). Costly and risky, since the video can be analyzed for AI generation too

Built to Protect Human Art

Serious artists can easily prove their humanity. Files β‰₯20% AI are rejected at upload β€” you'd have made $0 anyway. Below that, the 5Γ— AI Tax multiplier (5% revenue per 1% AI) makes AI content financially pointless well before the rejection threshold.
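To make the arithmetic explicit, the AI Tax works out to a simple payout multiplier. A sketch using only the figures stated above (rejection at 20% AI and higher, 5% of revenue lost per 1% AI below that):

```python
def ai_tax_multiplier(ai_percent: float) -> float:
    """Revenue multiplier under the AI Tax described above."""
    if ai_percent >= 20:
        return 0.0                            # rejected at upload: no revenue at all
    return max(0.0, 1.0 - 0.05 * ai_percent)  # lose 5% of revenue per 1% AI

# Examples:
#   ai_tax_multiplier(0)  -> 1.0   (fully human: full revenue)
#   ai_tax_multiplier(10) -> 0.5   (half of revenue gone, as stated above)
#   ai_tax_multiplier(19) -> ~0.05 (almost nothing left, just below the rejection line)
```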