From Trust to Proof: An Interview with Jason Teutsch, Founder & Chief Scientist at Truebit

As AI becomes increasingly integrated into critical sectors like healthcare, finance, and governance, concerns around its “black box” nature continue to grow. When decisions with real-world consequences are made without transparency or accountability, public trust erodes—and the promise of AI is undermined.
Jason Teutsch, founder of Truebit, believes it doesn’t have to be this way. By shifting AI systems from models built on trust to those grounded in verifiable proof, we can build technologies that are not only powerful but also accountable.
We sat down with Jason to explore how referenceable training data, cryptographic decision trails, and verifiable computation can change the future of AI.
Interview Questions
About You and Truebit
1. Jason, can you tell us a bit about your background and what led you to found Truebit?
My research in mathematics and computer science spanned recursion theory and security, and I had the opportunity to combine game-theoretic elements from both as I helped open the field of cryptoeconomics. I published the first peer-reviewed paper on Ethereum (CCS 2015) months before Ethereum’s mainnet launch. You could say that before there was Layer 2, there was Truebit. Indeed, every optimistic rollup today relies on the Truebit-style verification we introduced at that time. As Ethereum’s first scaling solution, Truebit focused on giving smart contracts computational superpowers rather than increasing transaction throughput.
2. What problem are you most motivated to solve with Truebit?
We identified several core problems in the blockchain space. First, there were significant constraints on on-chain compute – in reality, the vast majority of typical application code runs completely off-chain because blockchains are just one layer of the architecture needed to build an app.
Second, there was no framework for integration and composability. Creating applications that operate across different blockchains and access data from the Web2 world was nearly impossible.
Third, decentralization was a myth. Highly centralized cloud computing platforms ended up hosting the overwhelming majority of application code.
We needed a way to provide data provenance – showing how data was processed, when it was processed, and where it came from. This is particularly important in decentralized spaces where you can’t rely on reputation and need other forms of proof.
The Black Box Problem
3. Why is the lack of transparency in AI systems such a big issue, especially in high-stakes industries like healthcare or finance?
When AI systems make decisions that significantly impact people’s lives – whether diagnosing diseases or approving loans – we need to understand how those decisions are reached.
The black box nature of many AI systems becomes particularly problematic in areas like healthcare and finance because the stakes are so high. If an AI recommends a treatment or denies a loan, you want as much certainty and evidence as possible. Without transparency, it’s nearly impossible to detect bias, identify errors, or assign responsibility when things go wrong.
When something goes wrong, we often find ourselves trying to audit after the fact – essentially litigating problems years later rather than preventing them through proper verification. As I like to say, an ounce of prevention is worth a pound of cure.
The question shouldn’t be “why should I do verification?” but rather “why aren’t you doing verification?” That’s ultimately where we need to get to.
4. Do you think the general public is fully aware of the risks posed by opaque AI systems?
I don’t believe the general public fully grasps the risks yet. The paradigm of what constitutes “proof” is changing rapidly. In the 1980s, a Polaroid photo was considered solid evidence. Today, an image isn’t worth very much as proof. We need different tools now.
The public discussion focuses on sensationalized scenarios like job displacement or superintelligence, but misses the fundamental computational integrity problem we’re facing right now.
What’s particularly concerning is that as AI becomes more embedded in everyday applications, people become comfortable with this “black box” back office. They think, “Well, my apps work fine, so AI must be trustworthy.” But there’s a significant difference between accepting a bad movie or recipe recommendation and having, for example, unverified AI systems control financial markets or healthcare decisions – where the public generally shares implied trust in institutions.
Even with the rise of explainable AI techniques, we must be cautious about creating an “illusion of understanding” where explanations are superficial or don’t truly reflect the model’s core reasoning.
Truebit’s Approach
5. You mention auditable transcripts. Can you break down what these are and how they work?
Truebit’s functionality overlaps with cryptographic proofs like SNARKs, but our “transcript” proofs are a little more descriptive.
A Truebit transcript is an augmented certificate that chronicles code execution, documenting what was executed and when, inputs and outputs, identifiers for each party that touches data or code, and various annotations. These transcripts enable universal consensus on data origin, processing, and provenance.
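To make the shape of such a transcript concrete, here is a minimal, hypothetical sketch of a chained transcript record in Python. The field names and chaining scheme are illustrative assumptions rather than Truebit’s actual format; the point is that each entry commits to its inputs, outputs, and responsible party, and is linked to its predecessor so the record is tamper-evident.

```python
import hashlib
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class TranscriptEntry:
    """One step in a hypothetical execution transcript (illustrative only)."""
    step: int           # position in the execution
    timestamp: float    # when the step was executed
    operation: str      # what was executed (e.g., a task identifier)
    input_hash: str     # commitment to the step's inputs
    output_hash: str    # commitment to the step's outputs
    party_id: str       # identifier of the party performing the step
    annotations: dict   # free-form metadata

def entry_digest(entry: TranscriptEntry, prev_digest: str) -> str:
    """Chain each entry to its predecessor so the transcript is tamper-evident."""
    payload = json.dumps(asdict(entry), sort_keys=True) + prev_digest
    return hashlib.sha256(payload.encode()).hexdigest()

# Example: build a two-step transcript and compute its chained digest.
digest = "0" * 64  # genesis value
for step, op in enumerate(["load_model", "run_inference"]):
    entry = TranscriptEntry(
        step=step,
        timestamp=time.time(),
        operation=op,
        input_hash=hashlib.sha256(f"input-{step}".encode()).hexdigest(),
        output_hash=hashlib.sha256(f"output-{step}".encode()).hexdigest(),
        party_id="node-A",
        annotations={},
    )
    digest = entry_digest(entry, digest)

print("transcript digest:", digest)
```

Because every entry folds into the running digest, altering any past step after the fact changes the final digest, which is what lets a transcript serve as the “smoking gun” described below.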
Our platform uses a server called the Hub whose operation combines elements of blockchain and cloud. If a compromised Hub were to deviate from the prescribed protocol, the corresponding transcript would witness its erratic behavior as a smoking gun.
This creates accountability without requiring users to trust a single party. If something goes wrong, there’s proof of exactly where and what happened, enabling real-time mitigation.

6. What does it mean to move from “trust-based” to “proof-based” AI in practical terms?
Moving from trust-based to proof-based AI means shifting from systems where we inherently trust the entities operating them to systems where transparency and verification are built in from the ground up.
In a trust-based paradigm, we rely on the reputation of developers or platforms and accept outputs based on faith in the system’s design without independent, continuous checks. In contrast, a proof-based paradigm demands explicit, verifiable evidence that an AI computation was executed correctly according to specified programs and data.
This creates auditable records for every significant AI decision, verifiable data provenance to confirm sources, and algorithmic integrity to prove that the specified model was actually executed as expected.
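To illustrate one narrow slice of this in code: for a fully deterministic computation, a verifier can re-execute the committed program on the committed input and compare commitments against the prover’s claim. This is a minimal sketch under strong simplifying assumptions (deterministic code, a verifier willing to re-run everything); it is not Truebit’s protocol, and all names here are hypothetical.

```python
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def claimed_run(program, input_data: bytes) -> dict:
    """Prover side: run the program and publish input/output commitments."""
    output = program(input_data)
    return {"input_hash": sha256(input_data), "output_hash": sha256(output)}

def verify(program, input_data: bytes, claim: dict) -> bool:
    """Verifier side: re-execute and check that both commitments match the claim."""
    if sha256(input_data) != claim["input_hash"]:
        return False  # the prover committed to different data
    return sha256(program(input_data)) == claim["output_hash"]

# Toy deterministic "model": count vowels in the input.
model = lambda data: str(sum(c in b"aeiou" for c in data.lower())).encode()

claim = claimed_run(model, b"verifiable computation")
assert verify(model, b"verifiable computation", claim)  # honest claim checks out
```

Real systems replace naive re-execution with succinct proofs or interactive verification games precisely so the verifier does not have to redo all the work, but the underlying contract is the same: the claim stands or falls on evidence, not reputation.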
Transparent systems complement open source code. While open source provides guarantees for code running on local machines, code running elsewhere requires transparency to achieve a similar effect. An entity can publicize a code repository while running a different version on its server.
Fundamentally, invariants are a basis for trust. When someone asks for trust, they should be able to demonstrate invariants.
Ethics, Regulation, and Impact
7. How do these technologies help organizations stay compliant with ethical standards and regulations?
Our verification technology acts as a pre-audit rather than a post-incident investigation. When every computation carries a certificate of integrity, organizations can prove compliance with regulations in real time rather than piecing together evidence after something has gone wrong. We should be pre-auditing systems so that problems don’t happen, rather than coming back years later to litigate issues that could have been avoided.
Verifiable computation strengthens accountability by making it easier to pinpoint where issues originate if an AI system behaves unexpectedly. Was it an error in execution, flawed input data, or a problem with the model’s design?
It also improves data governance by offering proof of data lineage, confirming precisely what data was used for a specific AI decision. This is invaluable for complying with regulations concerning data quality, managing bias, and respecting data usage rights.
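One way to picture such a lineage proof is as a running hash over each named processing step, so that a single digest commits to “this raw data, transformed by exactly these steps, in this order.” The sketch below is purely illustrative; the pipeline, function names, and digest format are assumptions for the example, not an actual compliance scheme.

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def lineage_digest(raw_data: bytes, transforms) -> str:
    """Fold each named transformation into a running digest, yielding one
    commitment to the raw data plus the exact ordered processing steps."""
    digest = sha256_hex(raw_data)
    data = raw_data
    for name, fn in transforms:
        data = fn(data)
        digest = sha256_hex(f"{digest}|{name}|{sha256_hex(data)}".encode())
    return digest

# Hypothetical pipeline applied before data feeds an AI decision.
pipeline = [
    ("normalize", lambda d: d.lower()),
    ("redact",    lambda d: d.replace(b"ssn", b"***")),
]

recorded = lineage_digest(b"Patient SSN on file", pipeline)
# Later, an auditor re-runs the same pipeline on the same raw data:
assert lineage_digest(b"Patient SSN on file", pipeline) == recorded
```

If the recorded digest matches the recomputed one, the auditor knows precisely which data, processed by which steps, fed the decision; any substituted dataset or skipped redaction step would produce a different digest.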
A significant advantage is the potential to move towards “continuous compliance” rather than periodic manual audits. The ability to generate verifiable transcripts allows compliance to become an ongoing, embedded process, dramatically reducing overhead while providing stronger assurances.
8. Are policymakers and regulators ready to understand and adopt these kinds of technical safeguards?
There’s growing recognition among policymakers and regulators that verification is necessary, but there’s still a significant gap in understanding the technical solutions available.
We’re seeing a shift in standards. In the United States, for example, the government decided that previous identity verification standards weren’t sufficient, leading to initiatives like Real ID. These standards continue to evolve as technology advances and risks change.
The challenge is that verification technology is developing rapidly, while regulatory frameworks tend to move more slowly. What we need is more dialogue between technologists who understand these systems and policymakers who can incorporate them into meaningful regulations.
Adoption will likely be a gradual process requiring standardization of what constitutes adequate proof of computation, capacity building within regulatory bodies, and continuous collaboration between technologists, industry players, researchers, and policymakers.
Initially, we might see adoption driven by highly regulated sectors like finance and healthcare, where verifiable trust is non-negotiable. This could lead to AI systems that offer verifiable assurances being distinguished from those that don’t, potentially driving broader adoption as benefits become more widely recognized.
Looking Ahead
9. What’s your vision for the future of verifiable AI, and what role will Truebit play in it?
My vision is for a verified society where trust is embedded in every digital interaction and every computation is verifiable.
I foresee critical AI-driven decisions being routinely accompanied by verifiable proofs of their computational integrity and data lineage, much like financial statements are audited today. The “black box” of AI will transform into a “traceable box,” where the what and how of its computations are open to scrutiny when necessary.
I see Truebit as a counterweight to the AI movement; we need tools that allow us to sort through increasing amounts of noise. Truebit’s role is to be a key provider of the verification layer for the trustless economy, extending this capability into AI. We aim to empower developers with tools to build verifiable AI applications, enable “traceable AI” with transparent footprints, bridge blockchain and real-world data sources, and pioneer standards for transparent verification.
Looking forward, we’re working on privacy enhancements, ease of use improvements, handling larger files for AI processing, and supporting more programming languages. We’re building toward a future where every computational operation is a verifiable asset contributing to a more transparent and trustworthy digital economy.
10. Lastly, what advice would you give to entrepreneurs and technologists working to make AI more trustworthy and transparent?
Think about verification end-to-end. Your verification is only as good as the weakest link in your supply chain.
Embrace “proof over trust.” Challenge assumptions about where trust is being placed in your systems. Ask “How can this be proven?” and “Why should we trust this?”
Prioritize data integrity and provenance. The trustworthiness of an AI system depends on the quality and integrity of its data. Verification of data inputs is just as critical as verification of the computation itself.
Finally, recognize that the ability to prove the trustworthiness of your AI solutions can be a significant competitive advantage. As awareness of AI risks grows, users and businesses will seek out solutions that offer verifiable assurances. “Provable trust” is becoming a market differentiator.
Closing
Thank you, Jason, for the thoughtful insights. In an era where AI plays a growing role in shaping outcomes that affect us all, it’s reassuring to know that solutions exist to make these systems more transparent, auditable, and ultimately, fair. Truebit’s work reminds us that we don’t have to choose between innovation and accountability—we can have both.
