Blockchain-Secured Knowledge Sharing Tackles AI Hallucinations and Boosts LLM Reliability
Researchers developed BLOCKS, a blockchain framework that securely shares verified knowledge across silos to improve large language models’ accuracy. It uses reputation and cross-validation to ensure trustworthy AI responses.

Blockchain and LLMs: Secure Knowledge Sharing for Reliable AI Responses
Large language models (LLMs) often struggle to generate accurate information, sometimes producing factually wrong or nonsensical outputs, a problem known as "hallucination." One key remedy is giving these models reliable access to verified external knowledge. However, valuable data is frequently locked away in isolated systems, protected by privacy and security rules that make integration challenging.
A team of researchers from Shanghai Jiao Tong University and China Telecom Research Institute has developed a blockchain-based framework called BLOCKS to tackle this problem. Their system enables secure and efficient sharing of knowledge across separate silos while encouraging participation through a reputation and cross-validation mechanism.
Addressing Fragmented Knowledge with Blockchain
Knowledge fragmentation, especially when data is siloed and protected, limits the ability of LLMs to access trustworthy information. BLOCKS uses blockchain technology to coordinate data sharing between different systems without compromising security. Local data is distilled into concise prompts, and every transaction is securely recorded on the blockchain to create a verifiable trail of knowledge retrieval.
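The core idea of a tamper-proof retrieval trail can be illustrated with a hash-linked log: each retrieval record commits to the hash of the previous record, so altering any past entry is detectable. The sketch below is illustrative, not the BLOCKS implementation; the record fields (`silo_id`, `prompt_digest`) are hypothetical names.

```python
import hashlib
import json
import time

def record_retrieval(chain, silo_id, prompt_digest):
    """Append a knowledge-retrieval record to a hash-linked log.

    Each entry commits to the previous entry's hash, so later
    tampering with any recorded retrieval breaks the chain.
    Only a digest of the distilled prompt is stored, not raw silo data.
    """
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    entry = {
        "silo_id": silo_id,
        "prompt_digest": prompt_digest,
        "timestamp": time.time(),
        "prev_hash": prev_hash,
    }
    # Hash the entry body (sorted keys for a deterministic serialization).
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    chain.append(entry)
    return entry

def verify_chain(chain):
    """Check every entry still hashes correctly and links to its predecessor."""
    prev = "0" * 64
    for entry in chain:
        body = {k: v for k, v in entry.items() if k != "hash"}
        if entry["prev_hash"] != prev:
            return False
        if hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest() != entry["hash"]:
            return False
        prev = entry["hash"]
    return True
```

A real blockchain adds distributed consensus on top of this linking, so no single party controls the log; the hash chain alone only makes tampering evident, not impossible.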
This approach strengthens trust in the information feeding the LLM by ensuring that the data sources and the retrieval process are transparent and tamper-proof.
Incentivizing Quality Through Reputation and Validation
To maintain high-quality knowledge sharing, BLOCKS integrates a reputation system that scores contributors based on the reliability of their data. This scoring directly affects the trustworthiness of knowledge presented to the AI model.
Cross-validation further improves accuracy by comparing information from independent sources, filtering out inconsistencies, and reducing misinformation risks. This two-pronged approach helps ensure that only credible and verified knowledge shapes the AI's responses.
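One simple way to combine the two mechanisms is a reputation-weighted vote: conflicting answers from different silos are resolved in favor of the answer with the most reputation behind it, and contributors' scores are then nudged up or down depending on whether they agreed with the accepted answer. This is a minimal sketch of that idea, not the paper's actual scoring rule; the update rate `lr` and the [0, 1] score range are assumptions.

```python
from collections import defaultdict

def cross_validate(answers, reputation):
    """Pick the answer with the highest total reputation backing it.

    `answers` maps contributor -> claimed answer; `reputation` maps
    contributor -> score in [0, 1]. Conflicts are resolved by a
    reputation-weighted vote, filtering out low-trust outliers.
    """
    support = defaultdict(float)
    for contributor, answer in answers.items():
        support[answer] += reputation.get(contributor, 0.0)
    return max(support, key=support.get)

def update_reputation(reputation, answers, accepted, lr=0.1):
    """Move each score toward 1 if the contributor matched the accepted
    answer, toward 0 otherwise (exponential moving average)."""
    for contributor, answer in answers.items():
        target = 1.0 if answer == accepted else 0.0
        old = reputation.get(contributor, 0.5)
        reputation[contributor] = old + lr * (target - old)
    return reputation
```

Because the update is incremental, a contributor's standing reflects long-run reliability rather than any single disagreement, which blunts the impact of occasional honest errors.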
Streamlined Knowledge Access via API
The framework also includes a sophisticated query generation system with an application programming interface (API) that simplifies retrieving and integrating external data. This API allows developers and systems to efficiently request and consume knowledge distilled from multiple silos.
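A request/response shape in the spirit of such an API might look like the following. The field names (`max_silos`, `min_reputation`, `ledger_ref`) are purely illustrative, not the framework's actual interface.

```python
from dataclasses import dataclass, field

@dataclass
class KnowledgeQuery:
    question: str
    max_silos: int = 3           # query at most this many knowledge silos
    min_reputation: float = 0.5  # ignore contributors scored below this

@dataclass
class KnowledgeResponse:
    distilled_prompt: str  # condensed knowledge, ready for the LLM prompt
    sources: list = field(default_factory=list)  # silo IDs backing the answer
    ledger_ref: str = ""   # pointer to the on-chain retrieval record

def build_llm_prompt(query: KnowledgeQuery, response: KnowledgeResponse) -> str:
    """Combine retrieved, verified knowledge with the user's question
    into a single grounded prompt for the LLM."""
    return (
        "Use only the verified context below to answer.\n"
        f"Context: {response.distilled_prompt}\n"
        f"Question: {query.question}"
    )
```

The point of the abstraction is that the LLM-facing code never touches raw silo data: it sees only distilled prompts plus a ledger reference it can use to audit where the knowledge came from.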
Experiments show that BLOCKS facilitates effective knowledge sharing within the secure blockchain environment, significantly improving the accuracy and reliability of LLM outputs.
Building on Proven Technologies
The design of BLOCKS draws on research in blockchain interoperability, reputation systems, and LLM prompting methods. For example, studies like Belchior et al. (2021) offer insights into integrating diverse knowledge sources across different blockchain networks, overcoming compatibility challenges.
Reputation systems researched by Hu et al. (2018) inform the incentive mechanisms that encourage contributors to share accurate data. Advances in LLM prompting techniques ensure the distilled knowledge is translated into formats that maximize information transfer to language models.
Looking Ahead: Scaling and Integration
Future work will focus on scaling BLOCKS to handle larger datasets and more complex knowledge areas. Researchers will explore alternative consensus mechanisms—the methods blockchains use to agree on data validity—to improve performance and security.
Plans also include developing user-friendly interfaces to ease contribution and validation processes, fostering a broader community of contributors, and ensuring the framework's sustainability.
Ultimately, this research supports creating AI systems that provide responses grounded in verified, external knowledge, making AI applications more trustworthy and informative.