Martin Lopatka

Executive Director, Data & AI at Valtech

Founding Member
Individual

Martin Lopatka is a technology leader and applied researcher working at the intersection of machine learning, AI ethics, and technology policy, with a focus on moving responsible AI from principle to practice in complex organizations. He specializes in turning ambiguous, high-stakes AI problems into structured, auditable systems that deliver measurable value while meeting safety, privacy, and governance requirements at scale.

Technical and research focus

Lopatka’s technical background spans machine learning, applied statistics, large-scale telemetry, and probabilistic reasoning, grounded in a Ph.D. in forensic statistics from the University of Amsterdam and an M.Sc. in Artificial Intelligence. His publications include work on trustworthy AI, privacy-preserving analytics, browser telemetry analysis, and the statistical interpretation of complex forensic evidence, reflecting a career-long concern with inference under uncertainty and the societal impact of data-driven systems. He has led teams building production ML systems for web-scale environments, including Firefox and web-crawl platforms, with an emphasis on robust experimentation, causal reasoning, and rigorous measurement.
Responsible AI and policy engagement

In recent years, Martin has focused on responsible AI, serving as thought leader and offering owner for Responsible AI assessment frameworks and for enterprise security of LLM-based applications. His work includes designing assessment approaches that align technical controls with regulatory expectations and organizational risk postures, and leading delivery pods that integrate model governance, red teaming, and human-in-the-loop review into AI delivery pipelines. Through collaboration with cross-functional partners, he helps clients translate AI principles such as transparency, fairness, and accountability into concrete design decisions, documentation practices, and deployment standards.

Bridge between research, practice, and governance

Across roles in consulting, product organizations, and forensic research, Martin has operated as a bridge between deep technical work, organizational strategy, and public-interest concerns around technology. He has contributed to initiatives examining dark patterns, platform governance, and the civic implications of data infrastructures, and has helped design cultures of experimentation that normalize rigorous testing while reducing fear of failure. This combination of research literacy, hands-on engineering experience, and policy-aware framing makes him a trusted partner for organizations seeking to deploy advanced AI systems that are not only performant, but also accountable, resilient, and aligned with broader societal expectations.

Martin maintains an active role in the Privacy Enhancing Technologies (PET) community, the Mozilla Alumni Network, and the UBC Cognitive Systems Alumni Network, where he also serves on the mentoring team.