Data safe havens, federated learning and other privacy-enhancing techniques

Questions such as who is at higher risk of dying from Covid-19, how to improve treatment strategies, or which interventions for mental health are effective can only be answered by developing models on, and accessing, sensitive data in a way that minimises the risk of disclosure. There is a trade-off: locking the data away (or not collecting it systematically in the first place) keeps it safe, but at the cost of not extracting the beneficial knowledge it contains.
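One way to navigate this trade-off is federated learning, where the data never leaves its holder and only model updates are shared. The following is a minimal, illustrative sketch of federated averaging (FedAvg) for a linear model on synthetic data; all names, parameters and the synthetic data generator are hypothetical, not part of any particular reference implementation.

```python
import numpy as np

# Illustrative federated averaging (FedAvg) sketch for a linear model.
# Each "client" holds its own private dataset; only model weights leave
# the client, never the raw records. Data here is synthetic.

rng = np.random.default_rng(0)

def make_client_data(n=100, d=5):
    """Synthetic local dataset: features X and noisy linear targets y."""
    X = rng.normal(size=(n, d))
    true_w = np.arange(1, d + 1, dtype=float)
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    return X, y

def local_update(w, X, y, lr=0.01, epochs=5):
    """A few gradient-descent steps on the client's private data."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w = w - lr * grad
    return w

clients = [make_client_data() for _ in range(4)]
w_global = np.zeros(5)

for round_ in range(20):
    # Each client refines the current global model locally ...
    local_weights = [local_update(w_global.copy(), X, y) for X, y in clients]
    # ... and the server averages the returned weights; raw data never moves.
    w_global = np.mean(local_weights, axis=0)

print("learned weights:", np.round(w_global, 2))
```

Even in this toy setting the shared weights can still leak information about the local data, which is why federated learning is typically combined with further privacy-enhancing techniques such as differential privacy or secure aggregation.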

This project aims to review the state of the art and to consider different reference implementations, possibly contributing to the research project SEMLA: A SEcure Machine Learning Architecture.
