Diving Deep into Privacy-Preserving Machine Learning
Hi everyone,
Hope you're having a good weekend. First, a channel update, then we'll talk about privacy in machine learning. At the end of the email, I'll preview upcoming videos.
Channel update
We've rocketed past 100 subs, thanks to you all and a gracious tweet by Neil Lawrence! At the time of writing we're at 162 subs - well up on the 43 subs at the start of the month.
To those of you who are commenting, thank you for your video suggestions! There are many "big" topics that I've yet to get round to, and I've noted your suggestions down.
Privacy
I've uploaded two more videos on privacy. I took a different approach with these last couple of videos, diving deep into individual papers:
- PATE: one of the fundamental privacy-preserving ML algorithms out there, and a must-know if you're learning about differential privacy.
- CaPC: a new paper coming out at ICLR 2021 that ensures both privacy and confidentiality. This paper caught my eye as it shows how you can combine the techniques in our privacy-preserving toolkit.
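If you want a feel for PATE before watching the video, its core step is a noisy majority vote: each "teacher" model (trained on a disjoint data partition) votes for a label, Laplace noise is added to the vote counts, and the noisy winner is released. Here's a minimal sketch of that aggregation step; the teacher votes, class count, and epsilon value below are hypothetical, made up for illustration:

```python
import numpy as np

def noisy_aggregate(teacher_votes, num_classes, epsilon, rng):
    """Aggregate per-teacher label votes with Laplace noise (PATE's core step).

    teacher_votes: array of predicted class labels, one entry per teacher.
    Adding Laplace(1/epsilon) noise to each count means the released label
    is differentially private with respect to any single teacher's data.
    """
    counts = np.bincount(teacher_votes, minlength=num_classes).astype(float)
    counts += rng.laplace(loc=0.0, scale=1.0 / epsilon, size=num_classes)
    return int(np.argmax(counts))

rng = np.random.default_rng(0)
# Hypothetical votes from 250 teachers on one unlabelled example:
# 180 teachers vote for class 2, the rest vote roughly at random.
votes = np.concatenate([np.full(180, 2), rng.integers(0, 10, size=70)])
label = noisy_aggregate(votes, num_classes=10, epsilon=0.5, rng=rng)
```

With a strong majority like this, the noise rarely changes the outcome; the interesting privacy accounting happens when votes are close, which is exactly what the video digs into.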
Over the next month, we'll move away from privacy and look at other areas of trustworthy machine learning.
Future videos
The next video is on gender bias in Google Translate, which is somewhat topical right now. Here's a sneak peek of that bias in action.
Next month we'll have introductory videos for explainability and for adversarial examples (similar in style to the federated learning video). What questions do you have about these topics that you'd like to see answered? Reply with your questions and I'll incorporate them into the video!
Till next time,
Mukul