Next-Generation Secure Computing Systems

Quantum-Driven Intelligent and Secure Systems

Our research explores the rapidly evolving frontier where quantum computing, machine learning, blockchain, and cybersecurity converge. As quantum technologies progress from theoretical constructs to practical tools, they open the door to extraordinary computational capabilities—alongside new forms of risk that challenge today’s digital infrastructure. This project investigates how quantum algorithms can not only accelerate and strengthen machine learning models but also reshape the way intelligent systems learn, generalize, and adapt in uncertain environments.

At the same time, we examine how quantum advantage may disrupt the cryptographic foundations of blockchain networks, exposing structural vulnerabilities in systems designed for decentralization and trustlessness. By identifying these quantum-induced threats, we aim to develop innovative countermeasures, such as quantum-resistant cryptography, post-quantum consensus mechanisms, and adaptive security protocols capable of defending against both classical and quantum adversaries.
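
To make the scale of this disruption concrete, the well-known complexity results below summarize why today's blockchain signatures and hash-based mechanisms are exposed; the figures are textbook estimates rather than results of this project.

```latex
% Shor's algorithm solves factoring and discrete logarithms in polynomial time,
% undermining the RSA and ECDSA signatures that authorize blockchain transactions:
T_{\mathrm{Shor}}(n) \approx O(n^{3})
\qquad \text{vs.} \qquad
T_{\mathrm{GNFS}}(n) \approx \exp\!\big(O\big(n^{1/3}(\log n)^{2/3}\big)\big),
\quad n = \text{key length in bits.}

% Grover's algorithm gives a quadratic speedup for unstructured search, so a
% b-bit hash or symmetric key retains only about b/2 bits of quantum security:
T_{\mathrm{Grover}}(N) = O\big(\sqrt{N}\big), \qquad N = 2^{b}.
```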

Beyond these core areas, the project also explores broader implications for secure distributed computing, quantum-enhanced data privacy, and the governance challenges that arise when emerging technologies intersect. By integrating insights from physics, computer science, and security engineering, we seek to establish a comprehensive framework for resilient next-generation networks—systems capable of thriving in an era defined by quantum intelligence, autonomous decision-making, and decentralized digital ecosystems.

Fully Homomorphic Encryption Compilers

Fully Homomorphic Encryption (FHE) compilers offer a transformative approach to secure computation by enabling data to remain encrypted throughout the entire processing pipeline. These systems automatically translate conventional programs into sequences of homomorphic operations that run directly on ciphertexts, allowing complex computations to be carried out without ever exposing the underlying data. By removing the need for users to engage directly with intricate cryptographic machinery, FHE compilers make secure data outsourcing practical for a wide range of real-world scenarios, from cloud computing to collaborative analytics.
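
As a minimal sketch of the translation an FHE compiler automates, the hand-written example below evaluates an encrypted dot product with the open-source TenSEAL library; the library choice, parameters, and data are assumptions made for illustration, not components of a specific toolchain studied here.

```python
import tenseal as ts

# CKKS context for approximate arithmetic on encrypted real-valued vectors.
# Parameter choices here are illustrative defaults for shallow circuits.
context = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=8192,
    coeff_mod_bit_sizes=[60, 40, 40, 60],
)
context.global_scale = 2 ** 40
context.generate_galois_keys()  # rotation keys needed for dot products

def score_plain(x, w):
    # The conventional program an FHE compiler would start from.
    return sum(xi * wi for xi, wi in zip(x, w))

features = [0.5, 1.2, -0.7]   # client-side sensitive data
weights = [0.3, -0.1, 0.8]

# Encrypted equivalent: the same dot product evaluated entirely on ciphertexts,
# so an untrusted server never sees the plaintext features.
enc_features = ts.ckks_vector(context, features)
enc_weights = ts.ckks_vector(context, weights)
enc_score = enc_features.dot(enc_weights)

print(score_plain(features, weights))   # plaintext result
print(enc_score.decrypt()[0])           # approximately the same value
```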

As interest in privacy-preserving technologies grows, FHE compilers sit at the center of several emerging challenges and opportunities. They raise important questions about how to balance computational efficiency with strong privacy guarantees, how to support increasingly sophisticated applications, and how to ensure that encrypted computations remain both correct and performant. In addition, the rise of distributed and data-intensive environments highlights the need for scalable solutions that can handle diverse workloads while maintaining robust security protections.

By abstracting away much of the complexity behind encrypted computation, FHE compilers represent a critical step toward more trustworthy and privacy-aware systems. Their continued development has the potential to reshape how sensitive data is handled across industry, research, and government, paving the way for secure computation to become a foundational component of modern digital infrastructure.

Next-Generation Intelligent Systems

Generative AI & Explainable AI

Generative AI and Explainable AI (XAI) have emerged as two influential pillars in the advancement of modern artificial intelligence. Generative AI enables systems to produce new instances of data—ranging from text and images to more abstract representations—while XAI focuses on illuminating how AI models arrive at their decisions. Although these domains have evolved largely in parallel, they intersect on a critical issue: the need to understand and trust increasingly complex AI systems.

Many high-performing models function as black boxes, offering impressive outputs without revealing the reasoning behind them. This opacity introduces several challenges. It complicates the evaluation of model reliability, hinders our ability to detect errors or biases, and raises important concerns regarding safety and security. As AI systems become more deeply embedded in real-world applications, these challenges grow more pressing, emphasizing the need for methods that improve transparency without sacrificing performance.
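
One concrete family of such methods attaches post-hoc attributions to individual predictions. The sketch below uses a toy PyTorch classifier (random weights, invented for illustration) to show the simplest variant, input-gradient saliency; it demonstrates the mechanics only and is not presented as the approach taken in this project.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy classifier standing in for a black-box model; weights are random and
# purely illustrative.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))

x = torch.randn(1, 4, requires_grad=True)   # a single input we want to explain
logits = model(x)
target = logits.argmax(dim=1).item()        # class the model currently favors

# Input-gradient saliency: how sensitive is the chosen class score to each feature?
model.zero_grad()
logits[0, target].backward()
saliency = x.grad.abs().squeeze()

print("predicted class:", target)
print("per-feature saliency:", saliency.tolist())
```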

We explore broad strategies for bringing the strengths of Generative AI and Explainable AI into closer alignment. Rather than treating these fields as separate pursuits, we examine how insights from each can inform the development of more transparent and trustworthy AI systems. The emphasis is on understanding the conditions that allow models to be both expressive and interpretable, and on identifying high-level principles that support clearer, more reliable behavior without compromising overall performance.

Ultimately, this research seeks to contribute to a broader vision for the future of artificial intelligence—one in which models are both powerful and understandable, and where transparency becomes a built-in characteristic of systems deployed across sensitive or high-impact domains. By advancing general approaches that promote clarity and safety, the project aspires to help pave the way toward AI technologies that are dependable, responsible, and aligned with human needs.

Knowledge-Guided LLM Reasoning

Large language models (LLMs) increasingly rely on structured knowledge bases that link entities through rich semantic relationships. These interconnected sources provide context, constraints, and relational patterns that guide how models interpret information and make predictions. Our research examines how LLMs reason over such structured knowledge—especially when answering queries that require understanding entity types, relationships, and logical dependencies, as illustrated in the figure.

We investigate the full pipeline of knowledge-guided LLM reasoning: how models retrieve relevant information from a knowledge graph, filter and re-rank candidate answers using additional facts, and apply logical constraints to ensure that the final predictions align with real-world semantics. The example shown demonstrates how an LLM must differentiate between entities like Tesla, the Nobel Prize, and Time Person of the Year when given context about Elon Musk and the “won” relation. Such scenarios highlight a deeper challenge: maintaining accuracy and consistency when structured knowledge evolves or when multiple related facts must be synthesized.
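
A toy version of this pipeline is sketched below over a hand-written three-triple graph; the entities mirror the figure's example, while the retrieval and constraint logic is deliberately simplified and stands in for the LLM-plus-retriever systems studied here.

```python
# Minimal sketch of the retrieve -> filter -> constrain pipeline described above,
# over an illustrative three-triple graph (not a real knowledge base).
triples = [
    ("Elon Musk", "founded", "Tesla"),
    ("Elon Musk", "won", "Time Person of the Year"),
    ("Marie Curie", "won", "Nobel Prize"),
]

# Type information used as a logical constraint: "won" must link a person to an award.
entity_types = {
    "Elon Musk": "person",
    "Marie Curie": "person",
    "Tesla": "organization",
    "Nobel Prize": "award",
    "Time Person of the Year": "award",
}

def candidate_answers(head, relation):
    """Step 1: noisy retrieval -- everything linked to the head, plus every
    entity that has ever appeared as a tail of this relation."""
    local = {t for h, _, t in triples if h == head}
    by_relation = {t for _, r, t in triples if r == relation}
    return local | by_relation

def apply_constraints(head, relation, candidates):
    """Steps 2-3: keep candidates whose type matches the relation's signature
    and that are actually asserted for this head in the graph."""
    typed = {t for t in candidates if entity_types.get(t) == "award"}
    return {t for t in typed if (head, relation, t) in triples}

cands = candidate_answers("Elon Musk", "won")
print("retrieved candidates:", cands)     # includes Tesla (wrong type) and Nobel Prize (wrong fact)
print("after constraints:", apply_constraints("Elon Musk", "won", cands))
```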

Our work explores how LLMs internalize these relational structures, how inconsistencies propagate when the knowledge graph changes, and how to design mechanisms that help models adapt while preserving logical coherence. By uncovering the principles behind robust, knowledge-aware reasoning, we aim to build language models that remain reliable, interpretable, and resilient in dynamic, interconnected knowledge environments.

Security and Privacy Issues in Artificial Intelligence

Trustworthy Diffusion Systems

Diffusion models have rapidly emerged as one of the leading techniques for generating high-quality synthetic images, operating by gradually transforming random noise into coherent visual content. Their ability to produce realistic, diverse, and fine-grained outputs has made them central to many creative and practical applications—from digital art and content generation to data augmentation and design automation. As these models become more accessible and more capable, they continue to reshape how visual media is produced and consumed.
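
For readers unfamiliar with the mechanics, the sketch below walks through DDPM-style ancestral sampling, the loop that turns Gaussian noise into an image; the trained noise-prediction network is replaced by a placeholder and the schedule values are common illustrative defaults, so the output is meaningless but the update rule is the standard one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sketch of DDPM-style ancestral sampling (Ho et al., 2020).
T = 1000
betas = np.linspace(1e-4, 0.02, T)   # variance (noise) schedule
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def predict_noise(x, t):
    # Placeholder for the learned network eps_theta(x_t, t); a real model
    # would predict the noise that was added to the clean image.
    return np.zeros_like(x)

x = rng.standard_normal((8, 8))      # start from pure Gaussian noise
for t in reversed(range(T)):
    eps = predict_noise(x, t)
    # Posterior mean of x_{t-1} given x_t and the predicted noise.
    mean = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
    noise = rng.standard_normal(x.shape) if t > 0 else 0.0
    x = mean + np.sqrt(betas[t]) * noise

print(x.shape)  # the final sample (meaningless here without a trained model)
```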

However, the growing power and availability of diffusion-based image generation also bring important societal and security considerations. These models can be misused to create highly convincing deepfakes, generate harmful or unregulated Not Safe For Work (NSFW) content, or replicate copyrighted material in ways that challenge existing legal and ethical frameworks. Their behavior is shaped by the data on which they are trained, raising additional concerns about bias, provenance, and the potential for misuse by malicious actors.

Understanding the broader implications of diffusion models has therefore become an essential part of AI research. Beyond their impressive creative potential, they prompt critical questions about safety, accountability, and the responsible development of generative technologies. As these systems continue to evolve, ongoing study is necessary to ensure they are deployed in ways that are trustworthy, ethical, and aligned with societal values.

Vulnerabilities in Graph Neural Networks

Training Graph Neural Networks (GNNs) in distributed environments such as Federated Learning (FL) has gained considerable traction as these models continue to show strong performance across a variety of application areas. Yet, operating in distributed settings introduces important security concerns: models may be susceptible to adversarial influences, and sensitive information held by individual clients or devices may be exposed through unintended inference. Because Federated Graph Neural Networks (FedGNNs) are still emerging, many of their potential vulnerabilities—as well as strategies for safeguarding them—remain insufficiently understood.
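
As a reference point for where these risks enter, the sketch below shows the weighted parameter averaging at the core of FedAvg-style federated training, with each client's local GNN optimization replaced by a placeholder; every update exchanged in this loop is a potential surface for poisoning or inference attacks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Minimal sketch of the FedAvg aggregation step underlying federated GNN training.
# Client models are reduced to flat parameter vectors; in practice each client
# trains a GNN on its private local graph and shares only parameter updates.
def local_update(global_params, num_nodes):
    # Placeholder for local GNN training on a client's private graph.
    return global_params - 0.1 * rng.standard_normal(global_params.shape), num_nodes

def fed_avg(client_updates):
    # Weight each client's parameters by the size of its local graph.
    total = sum(n for _, n in client_updates)
    return sum(params * (n / total) for params, n in client_updates)

global_params = np.zeros(16)
for round_idx in range(5):
    updates = [local_update(global_params, n) for n in (120, 80, 200)]
    global_params = fed_avg(updates)

print(global_params[:4])
```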

We examine the security landscape surrounding FedGNNs from a broad perspective. This includes understanding how the unique characteristics of graph-structured data and the distributed nature of FL protocols interact to create new risks. The work also seeks to outline general principles for strengthening these systems, emphasizing approaches that enhance reliability, robustness, and trust without relying on any single defense technique. By clarifying the challenges and identifying foundational strategies for mitigation, this research aims to contribute to the development of secure and dependable distributed graph learning frameworks.