The @SentientAGI team recently presented another substantial paper at the Lock-LLMs workshop at NeurIPS 2025. The paper, "OML: Cryptographic Primitives for Verifiable Control in Open-Weight LLMs", proposes a new method for verifiable control of open-weight large models: OML (Open Model License / Ownership Marking Layer).
The core idea is intuitive: by embedding control logic directly into the model's inference pipeline, open-weight models can run safely and verifiably. The design has three layers:
1️⃣ Verifiability: zero-knowledge proofs attest that each call is legitimate;
2️⃣ Enforcement: a TEE (Trusted Execution Environment) prevents the controls from being bypassed;
3️⃣ Monetization: blockchain plus NFTs make model revenue traceable.
Unlike traditional watermarks, OML maintains control even in a white-box setting: in experiments it detects model distillation and parameter theft with over 97% accuracy, at a performance cost of under 2%. This could be a key turning point for the security governance of open models.
Interestingly, OML divides the model into two main roles: control plane and data plane.
The control plane acts as a strict gatekeeper: it manages who may invoke the model and which policies apply, records every operation, and produces signed run records and immutable audit logs.
The data plane focuses purely on doing the work: processing tokens, with nothing else mixed in.
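The split described above can be sketched in a few lines. This is a minimal illustration, not Sentient's actual implementation: the names (`ControlPlane`, `run_inference`) and the hash-chained log are assumptions standing in for real signed, attested logging inside a TEE.

```python
from dataclasses import dataclass, field
import hashlib
import json
import time

@dataclass
class ControlPlane:
    """Gatekeeper: checks authorization and keeps a hash-chained audit log."""
    authorized_keys: set
    audit_log: list = field(default_factory=list)

    def authorize(self, api_key: str, prompt: str) -> bool:
        ok = api_key in self.authorized_keys
        prev = self.audit_log[-1]["digest"] if self.audit_log else "genesis"
        entry = {
            "ts": time.time(),
            "key": api_key,
            "ok": ok,
            "prompt_hash": hashlib.sha256(prompt.encode()).hexdigest(),
            "prev": prev,  # chaining makes after-the-fact tampering detectable
        }
        entry["digest"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self.audit_log.append(entry)
        return ok

def data_plane(prompt: str) -> str:
    """Stand-in for token processing; a real deployment runs the model here."""
    return f"completion for: {prompt}"

def run_inference(cp: ControlPlane, api_key: str, prompt: str) -> str:
    """Every call must pass the control plane before the data plane does work."""
    if not cp.authorize(api_key, prompt):
        raise PermissionError("call rejected by control plane")
    return data_plane(prompt)
```

The point of the structure: the data plane never sees policy logic, and every call, allowed or rejected, leaves a chained log entry.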
This division of labor lets the model run locally without relying on a centralized API, while keeping authorization, traceability, and auditing fully under control. Sentient embedded 24,576 key-response pairs into a fine-tuned version of Llama-3.1-8B with stable performance, and the fingerprints survive further fine-tuning, distillation, and model merging, effectively giving AI models a signature and copyright protection.
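The key-response fingerprinting idea can be sketched as follows. In the real scheme the 24,576 pairs are baked into the weights during fine-tuning; here a plain dict stands in for a fingerprinted model so the verification logic itself is runnable. Function names and the 90% threshold are illustrative assumptions.

```python
import hashlib

def make_pairs(owner_secret: str, n: int = 8) -> dict:
    """Derive secret trigger prompts and expected responses from an owner secret."""
    pairs = {}
    for i in range(n):
        key = hashlib.sha256(f"{owner_secret}:key:{i}".encode()).hexdigest()[:16]
        resp = hashlib.sha256(f"{owner_secret}:resp:{i}".encode()).hexdigest()[:16]
        pairs[key] = resp
    return pairs

def verify_ownership(model_fn, pairs: dict, threshold: float = 0.9) -> bool:
    """Query a suspect model with the secret keys; a high match rate
    indicates it descends from the fingerprinted model."""
    hits = sum(1 for k, v in pairs.items() if model_fn(k) == v)
    return hits / len(pairs) >= threshold
```

Usage: an owner queries any suspect deployment with the secret keys; a clean model answers the obscure triggers normally, while a stolen or distilled copy reproduces the embedded responses.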
Meanwhile, Sentient's LiveCodeBench Pro brings AI programming capabilities back to the real battlefield:
AI pass rates on high-difficulty programming problems are close to zero.
Every step, from reading the statement and designing a solution to generating, compiling, and executing code, strictly follows algorithm-competition standards.
It covers authoritative contest problems from Codeforces, ICPC, and IOI, and uses an Elo-based dynamic difficulty rating system.
Local reproduction, hidden tests, and complete log generation make a model's ability verifiable and traceable.
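The Elo mechanic mentioned above treats each problem as an opponent of the model. A minimal sketch of a standard Elo update follows; the K-factor of 32 and the pairing scheme are textbook defaults, not the benchmark's actual parameters.

```python
def elo_expected(r_a: float, r_b: float) -> float:
    """Expected score of rating r_a against rating r_b under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def elo_update(r_model: float, r_problem: float, solved: bool, k: float = 32.0):
    """Return updated (model, problem) ratings after one attempt.
    Solving a problem rated far above the model yields a large gain;
    failing one rated far below yields a large loss."""
    e = elo_expected(r_model, r_problem)
    s = 1.0 if solved else 0.0
    return r_model + k * (s - e), r_problem + k * (e - s)
```

Because ratings shift after every attempt, problem difficulty calibrates itself against the field of models rather than being fixed by hand.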
In an era when generative AI chases leaderboard scores and prompt tricks, LiveCodeBench Pro is a clear mirror: it shows a model's true limits in algorithmic understanding, long-range logic, and complexity control, so "models can write code" is no longer a hollow claim.
With OML and LiveCodeBench Pro, @SentientAGI is reshaping the standards for AI safety, controllability, and capability: open models gain copyright protection, and AI programming gets a real examination environment. That marks an important milestone for community-driven open-source AGI.
#KaitoYap @KaitoAI #Yap #Sentient