

Teams building with AI are often presented with a false choice: share data to get frontier models, or protect privacy and accept weaker results. That framing is convenient, especially for anyone who benefits from accessing your data, but it is not the full story.

This session introduces a counterintuitive paradigm in which AI models improve without ever collecting your raw data, and organizations collaborate without giving up control. By combining Federated Learning, the idea of training locally while learning globally, with Homomorphic Encryption, which allows computation directly on encrypted model updates, the result is a system that treats privacy not as a policy or a feature, but as a structural property of the design.
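The "train locally, learn globally" loop can be sketched as basic federated averaging (FedAvg). This is a minimal illustration, not the session's actual implementation: the single-weight linear model, learning rate, and synthetic client datasets are all placeholder assumptions chosen to keep the example self-contained.

```python
import random

def local_update(weights, data, lr=0.1):
    """One gradient step of least-squares linear regression on a client's
    private data. The raw (x, y) pairs never leave the client; only the
    updated weights do."""
    w, b = weights
    gw = gb = 0.0
    for x, y in data:
        err = (w * x + b) - y
        gw += 2 * err * x / len(data)
        gb += 2 * err / len(data)
    return (w - lr * gw, b - lr * gb)

def federated_round(weights, client_datasets):
    """Server-side step of FedAvg: average the locally trained weights
    without ever seeing the underlying data."""
    updates = [local_update(weights, d) for d in client_datasets]
    w_avg = sum(u[0] for u in updates) / len(updates)
    b_avg = sum(u[1] for u in updates) / len(updates)
    return (w_avg, b_avg)

# Illustrative private datasets: each client holds noisy samples of y = 2x + 1.
random.seed(0)
clients = [[(x, 2 * x + 1 + random.gauss(0, 0.1)) for x in range(5)]
           for _ in range(3)]

weights = (0.0, 0.0)
for _ in range(200):
    weights = federated_round(weights, clients)
# weights now approximates the global fit (w ≈ 2, b ≈ 1) even though the
# server never touched any client's raw samples.
```

Real systems weight the average by client dataset size and run many local epochs per round, but the privacy-relevant shape is the same: data stays put, and only model parameters travel.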

Through practical examples, this session explores why centralized training creates hidden constraints, such as security exposure, compliance friction, and data gravity, that limit real-world adoption. You will learn how Federated Learning inverts the classic approach of bringing the data to the code by bringing the code to the data instead, and how Fully Homomorphic Encryption closes the final subtle leak: what model updates can reveal, even when your valuable data never leaves your premises.
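To make the encrypted-aggregation idea concrete, here is a minimal sketch using a textbook Paillier cryptosystem as a stand-in. Note the hedges: Paillier is only additively homomorphic (not fully homomorphic, as the session's FHE material covers), the primes below are toy-sized for readability, and the integer "updates" are illustrative placeholders. Production systems use vetted libraries and much larger parameters.

```python
import math
import random

# Toy Paillier keypair (tiny primes for illustration only; real deployments
# use moduli of thousands of bits generated by a vetted library).
p, q = 61, 53
n = p * q                      # public modulus
n2 = n * n
g = n + 1                      # standard generator choice
lam = math.lcm(p - 1, q - 1)   # private exponent lambda
mu = pow(lam, -1, n)           # valid shortcut because g = n + 1

def encrypt(m):
    """Encrypt integer m with fresh randomness r."""
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    """Paillier decryption: L(c^lam mod n^2) * mu mod n, L(x) = (x-1)//n."""
    return ((pow(c, lam, n2) - 1) // n) * mu % n

# Three clients send encrypted, integer-scaled model updates to the server.
updates = [7, 11, 4]
ciphertexts = [encrypt(u) for u in updates]

# Multiplying Paillier ciphertexts adds the plaintexts underneath, so the
# server aggregates without decrypting any individual client's update.
aggregate = 1
for c in ciphertexts:
    aggregate = (aggregate * c) % n2

assert decrypt(aggregate) == sum(updates)  # only the total is ever revealed
```

The design point this illustrates is the one named in the abstract: even the per-client updates, which can leak information about training data, stay opaque to the aggregator, and only the combined result is decrypted.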
César Soto Valero
SEB Group
César Soto Valero is currently a Data Scientist at SEB Group. He has experience building and maintaining AI/ML systems that serve millions of customers in real time. César earned a PhD in Computer Science from KTH Royal Institute of Technology, where his research focused on inventing novel low-level program analysis techniques to mitigate software bloat and improve the efficiency, security, and maintainability of large codebases. With a background spanning both academic research and industrial engineering, César bridges the gap between theory and practice by translating research breakthroughs into robust, production-grade software systems. He is passionate about engineering, AI, and the practical challenges of deploying scalable, reliable software solutions that serve real user needs.