Rethinking AI Security in the Age of MCP: An Interview with Adi Hirschstein, VP, Product at Duality Technologies
Profile
Name: Adi Hirschstein
Position: Vice President, Product
Industry: Privacy-Preserving AI, Cybersecurity, Cryptography
Location: United States
Website: dualitytech.com
As AI becomes more autonomous and deeply integrated with enterprise systems, new infrastructure standards are emerging, like the Model Context Protocol (MCP), often dubbed the “USB-C for AI.” While MCP opens up powerful pathways for AI to dynamically interact with tools, platforms, and data, it also introduces new and serious security vulnerabilities.
Adi has over 20 years of experience as an executive, product manager, and entrepreneur, driving innovation in technology companies, primarily in B2B startups focused on data and AI. Currently, Adi is the VP of Product Management at Duality, a pioneer in privacy-enhancing technologies. Previously, he served as VP of Product at Iguazio, an MLOps company that was acquired by McKinsey. Before that, he was Director of Product at EMC, following its acquisition of Zettapoint, a database and storage startup where he led product strategy from inception to market growth as VP of Product.
We spoke with Adi about the implications of MCP, the urgent need for secure-by-default AI systems, and how Duality is reimagining privacy in the era of connected intelligence.
About You and Duality

1. Adi, can you walk us through your background?
I’ve spent over 20 years building and scaling products at the intersection of data, AI, and enterprise technology—mostly within B2B startups. I’m currently the VP of Product Management at Duality, where we’re pioneering privacy-enhancing technologies that enable secure data collaboration across organizations.
Before Duality, I was VP of Product at Iguazio, an MLOps company that was ultimately acquired by McKinsey. Prior to that, I led product at Zettapoint, a database and storage startup, where I helped shape the product strategy from inception through growth. After EMC acquired Zettapoint, I continued there as Director of Product.
Throughout my career, I’ve been passionate about bringing innovative, data-driven technologies to market—especially in emerging areas like AI and secure computing.

2. What is the mission behind Duality?
Duality’s mission is to help organizations unlock insights from data they previously couldn’t access, enabling improved products and services across industries. In today’s AI-driven world, we know that models perform better with access to more data. Our mission is to make that possible—empowering AI innovation while ensuring sensitive data remains secure and private.

On MCP and AI’s Expanding Attack Surface
3. MCP is being referred to as the “USB-C for AI.” What exactly does that mean, and why is it significant?
MCP brings standardization to how we work with large language models, much like how USB-C unified device connectivity. By providing a consistent interface, MCP makes it significantly easier to integrate additional tools and resources. For us as product developers, it enables the creation of context-aware applications that seamlessly incorporate the capabilities needed to achieve specific goals. For example, we can build assistant agents for researchers and provide the models with the exact context and domain-specific resources they need to deliver optimal outcomes.
4. As this infrastructure standard gains adoption, what new risks are emerging that enterprises should be aware of?
MCP-based systems can significantly expand the attack surface by linking tools, memory, functions and user context into a complex, interconnected environment. This complexity increases the risk of vulnerabilities across components. Persistent memory, if not properly managed, can unintentionally retain or expose private or regulated data across sessions or users. Furthermore, without strict controls, the system may invoke unauthorized tools, leading to unintended or potentially harmful actions.
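The strict controls Adi mentions often take the form of an explicit allow-list that gates every tool invocation. The sketch below illustrates the idea with a deny-by-default policy check; the role names, tool names, and function are hypothetical illustrations, not part of any specific MCP implementation.

```python
# Minimal sketch of a deny-by-default allow-list gate for agent tool
# calls. Roles, tool names, and the policy mapping are hypothetical.

ALLOWED_TOOLS = {
    "analyst": {"search_documents", "summarize"},
    "admin": {"search_documents", "summarize", "export_data"},
}

def authorize_tool_call(role: str, tool_name: str) -> bool:
    """Return True only if the role is explicitly granted the tool."""
    return tool_name in ALLOWED_TOOLS.get(role, set())

# Unknown roles and ungranted tools are rejected by default.
assert authorize_tool_call("analyst", "summarize") is True
assert authorize_tool_call("analyst", "export_data") is False
assert authorize_tool_call("guest", "summarize") is False
```

Deny-by-default matters here: an agent asking for a tool that was never granted fails closed rather than performing an unintended action.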
5. Can you give an example of how MCP might unintentionally open doors to system overreach or privacy breaches?
Due to the way MCP is structured, if proper isolation between users and sessions isn’t enforced, a request for financial information by one user could inadvertently expose data belonging to another user.
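One way to enforce the isolation Adi describes is to scope every stored value to a session identifier, so a lookup from one session can never return another session's data. This is a minimal in-memory sketch under that assumption; the class and method names are illustrative, not from any MCP SDK.

```python
# Minimal sketch of session-scoped storage: every read and write is
# keyed by session ID, so cross-session reads fail by construction.
# Names are illustrative, not from any MCP SDK.

class SessionStore:
    def __init__(self):
        self._data = {}  # (session_id, key) -> value

    def put(self, session_id: str, key: str, value):
        self._data[(session_id, key)] = value

    def get(self, session_id: str, key: str):
        # Lookups are scoped to the caller's session; one user's
        # financial data is never visible from another session.
        if (session_id, key) not in self._data:
            raise KeyError(f"{key!r} not found in session {session_id!r}")
        return self._data[(session_id, key)]

store = SessionStore()
store.put("session-alice", "balance", 1200)
try:
    store.get("session-bob", "balance")  # cross-session read fails
except KeyError:
    pass
```

The point of the sketch is the key structure: because the session ID is part of the lookup key rather than a filter applied afterward, there is no code path that returns another user's data.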
Privacy, Security, and Ethical Development
6. You emphasize the need for privacy and security to be the default, not an afterthought. What does that look like in an MCP-driven world?
MCP-based systems must be designed with strict isolation between sessions and users from day one. It’s also essential to establish clear policies for tool authorization and resource access, aligned with your organization’s security and governance standards.
Equally important is ensuring strong observability and auditability across the system. In an environment where autonomous agents operate on behalf of users, organizations need full visibility into what actions are being taken, by whom, and why. This not only helps detect and respond to potential misuse or anomalies, but also builds trust in the system by making agent behavior transparent and accountable.
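The "what, by whom, and why" visibility Adi calls for maps naturally to structured audit records written for every agent action. The sketch below shows one possible shape as JSON lines; the field names and the in-memory log are assumptions for illustration, not a prescribed format.

```python
# Minimal sketch of an append-only audit trail for agent actions.
# Field names and the in-memory list are illustrative assumptions;
# a real deployment would write to durable, tamper-evident storage.

import json
from datetime import datetime, timezone

audit_log = []

def record_action(agent: str, user: str, tool: str, reason: str):
    """Log what action was taken, by which agent, for whom, and why."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent": agent,
        "user": user,
        "tool": tool,
        "reason": reason,
    }
    audit_log.append(json.dumps(entry))

record_action("research-assistant", "alice", "search_documents",
              "user asked for a quarterly summary")
```

Capturing the reason alongside the action is what makes agent behavior auditable after the fact, not just countable.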
7. How can organizations align fast-paced AI innovation with growing data privacy regulations like GDPR or HIPAA?
Organizations must embed privacy into their AI development lifecycle, rather than treat it as an afterthought. A critical aspect of this is proactively selecting the appropriate privacy-preserving technologies. For example, should they use fully homomorphic encryption, federated learning, or confidential computing? Each approach comes with its own strengths, limitations, and trade-offs between utility and security. Evaluating and integrating these technologies should be a foundational step in any AI innovation strategy.

8. What role do encrypted AI workflows play in enabling secure collaboration across organizational or even international boundaries?
For organizations handling sensitive data, encryption offers a way to safeguard that data while leveraging AI models. For AI vendors, the dynamic is reversed—they can protect their model IP while allowing customers to securely run workloads on those models. In cross-border or cross-organization scenarios, this technology becomes essential. It enables secure collaboration while ensuring data privacy, confidentiality, and compliance with regulatory requirements.
Looking Ahead
9. As AI systems become more autonomous and interoperable, what principles should guide their secure development and deployment?
As AI systems become more autonomous and interoperable, their attack surface expands. To mitigate this, it’s essential to adopt a security-first design approach. This includes integrating privacy-preserving technologies to protect both data and models, limiting access to tools and resources, clearly defining the actions AI is allowed to take, enforcing isolation between sessions and users, and implementing comprehensive monitoring and auditing mechanisms.
10. What’s next for Duality in leading the charge on privacy-preserving AI? And what advice would you give to companies preparing to adopt these next-gen AI systems?
We envision a world where autonomous agents handle tasks that were once performed manually or through traditional applications. In this future, Duality will play a critical role in safeguarding sensitive data and models used by these agents through advanced privacy-preserving technologies. The key advice I would offer to companies is to adopt security-first design principles from day one because retrofitting privacy and security after deployment is not only difficult but often ineffective.
Closing:
Thank you, Adi, for the insightful conversation. As AI continues to evolve from isolated models to highly connected, dynamic systems, the need for secure, privacy-preserving infrastructure is no longer optional—it’s foundational. Duality Technologies’ work proves that we can innovate boldly while protecting what matters most: our data, our systems, and our trust.
