The @SentientAGI team recently presented another hardcore paper at the Lock-LLMs workshop at NeurIPS 2025. Titled "OML: Cryptographic Primitives for Verifiable Control in Open-Weight LLMs", it proposes OML (Open, Monetizable, Loyal), a new method for verifiable control of open-weight large models.



The core idea is intuitive: embed control logic into the model's inference pipeline so that open-weight models can run both safely and verifiably. The three-layer design is impressive:
1️⃣ Verifiability: zero-knowledge proofs attest that each call is authorized;
2️⃣ Enforcement: a TEE (Trusted Execution Environment) prevents the controls from being bypassed;
3️⃣ Monetization: blockchain and NFTs make model revenue traceable.
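As a rough illustration of the verifiability layer (a simplified stand-in, not the paper's actual protocol, which uses zero-knowledge proofs): a control layer can require every inference call to carry an authorization tag that the runtime checks before the model runs. All names below are hypothetical:

```python
import hmac
import hashlib
import json

# Hypothetical per-licensee secret issued by the model owner
LICENSE_KEY = b"issued-by-model-owner"

def sign_call(prompt: str, caller_id: str) -> str:
    """The licensee tags each call with an HMAC over the full request."""
    msg = json.dumps({"caller": caller_id, "prompt": prompt}).encode()
    return hmac.new(LICENSE_KEY, msg, hashlib.sha256).hexdigest()

def authorize(prompt: str, caller_id: str, tag: str) -> bool:
    """The runtime recomputes the tag and rejects unauthorized calls."""
    expected = sign_call(prompt, caller_id)
    return hmac.compare_digest(expected, tag)

tag = sign_call("hello", "alice")
assert authorize("hello", "alice", tag)        # licensed call passes
assert not authorize("hello", "mallory", tag)  # wrong caller is rejected
```

A real deployment would replace the shared HMAC secret with asymmetric signatures or ZK proofs so the runtime never holds the signing key.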

Unlike traditional watermarking, OML retains its control capabilities even in a white-box setting: in the reported experiments it detects model distillation and parameter theft with over 97% accuracy, at a performance cost of under 2%. This marks a key turning point in the security governance of open models.

Interestingly, OML splits the model into two roles: a control plane and a data plane.
The control plane acts like a strict regulator: it decides who may invoke the model and under which policies, records every operation, and produces signed execution records and immutable audit logs.

The data plane focuses purely on doing the work, processing tokens, without taking on any other tasks.
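A minimal sketch of this split (hypothetical names, not Sentient's implementation): the control plane checks policy and appends tamper-evident entries to an audit log, while the data plane only transforms tokens:

```python
import hashlib
import time
from typing import Callable

class ControlPlane:
    """Decides who may call the model and keeps an append-only audit log."""
    def __init__(self, allowed: set):
        self.allowed = allowed
        self.audit_log = []

    def gate(self, caller: str, data_plane: Callable[[str], str], prompt: str) -> str:
        if caller not in self.allowed:
            raise PermissionError(f"{caller} is not licensed")
        out = data_plane(prompt)  # the data plane only does the work
        self.audit_log.append({
            "caller": caller,
            "time": time.time(),
            # hash input+output so the log is tamper-evident without storing raw data
            "digest": hashlib.sha256((prompt + out).encode()).hexdigest(),
        })
        return out

def data_plane(prompt: str) -> str:
    """Stand-in for token processing; the real data plane runs the LLM."""
    return prompt.upper()

cp = ControlPlane(allowed={"alice"})
print(cp.gate("alice", data_plane, "hello"))  # prints "HELLO"; one audit entry recorded
```

The point of the design choice is that the data plane can run anywhere, even locally, while every invocation still passes through and is recorded by the control plane.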

This division of labor lets the model run locally without relying on centralized APIs, while keeping authorization, traceability, and auditing fully under control. Sentient embedded 24,576 key-response pairs into a fine-tuned version of Llama-3.1-8B while maintaining stable performance, and the pairs remain effective after further fine-tuning, distillation, or model merging, truly giving AI models a "signature" and copyright protection.
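Those key-response pairs act as fingerprints: an owner who suspects a model was derived from theirs can query it with the secret keys and check how many responses match. A toy version of that verification loop (hypothetical table; in OML the pairs are trained into the model weights, not stored in a lookup):

```python
# Hypothetical fingerprint table; in OML these pairs are trained into the weights.
FINGERPRINTS = {
    "key-7f3a": "resp-91c2",
    "key-0b1d": "resp-44e8",
    "key-c55e": "resp-a017",
}

def verify_ownership(model, threshold: float = 0.9) -> bool:
    """Query the suspect model with secret keys; a high match rate implies a derived copy."""
    hits = sum(model(key) == resp for key, resp in FINGERPRINTS.items())
    return hits / len(FINGERPRINTS) >= threshold

suspect = FINGERPRINTS.get            # simulates a model that memorized the pairs
assert verify_ownership(suspect)      # derived copy is flagged
clean = lambda key: "unrelated"       # simulates an independent model
assert not verify_ownership(clean)    # independent model is not flagged
```

With tens of thousands of pairs, even a distilled or merged model that retains only part of the fingerprint set can clear the detection threshold, which is consistent with the robustness the post describes.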

Meanwhile, Sentient's LiveCodeBench Pro brings AI programming evaluation back to the real battlefield:
AI pass rates on the hardest competitive-programming problems are close to zero; every step, from reading the problem statement through designing the solution, generating code, and compiling and executing it, strictly follows algorithm-competition standards; problems come from authoritative contests such as Codeforces, ICPC, and IOI, under an Elo-style dynamic difficulty rating system; local reproduction, hidden tests, and complete log generation make a model's capabilities verifiable and traceable.
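The Elo-style difficulty rating works like chess Elo: a model that solves a problem rated above its own level gains rating, and the problem's difficulty rating adjusts in the opposite direction. A standard Elo update, shown as a sketch (assumed details; LiveCodeBench Pro's exact formula may differ):

```python
def elo_update(model_r: float, problem_r: float, solved: bool, k: float = 32.0):
    """Standard Elo: expected score from the rating gap, then a K-scaled correction."""
    expected = 1.0 / (1.0 + 10 ** ((problem_r - model_r) / 400.0))
    score = 1.0 if solved else 0.0
    delta = k * (score - expected)
    # the model gains exactly what the problem "loses", keeping total rating constant
    return model_r + delta, problem_r - delta

# A 1500-rated model solving an 1800-rated problem gains a large chunk of rating
m, p = elo_update(1500, 1800, solved=True)
print(round(m), round(p))  # prints "1527 1773"
```

The appeal of a dynamic rating over a fixed pass rate is that problem difficulty is calibrated by outcomes rather than assigned by hand, so the benchmark stays discriminative as models improve.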

In an era when generative AI chases high scores and prompt tricks, LiveCodeBench Pro serves as a clear mirror: it exposes a model's true limits in algorithmic comprehension, long-range logic, and complexity control, so "models can write code" is no longer a hollow claim.

With OML and LiveCodeBench Pro, @SentientAGI is reshaping the standards of AI safety, controllability, and capability. Open models now have copyright protection, and AI programming has a real examination environment, an important milestone for community-driven open-source AGI.

#KaitoYap @KaitoAI #Yap #Sentient