Hey folks 🎉
my computer is set up in my new flat and I’m hunting for a new couch and other furniture. But that doesn’t keep us from some amazing machine learning!
A ConvNet for the 2020s is a new paper that shows how a few small tweaks let CNNs outperform transformers without using attention!
I love improving science communication. This LaTeX package enables annotating equations within the document.
PDFs about data science interviews are popular on LinkedIn; now a book about deep learning interviews is available for free on arXiv.
I finally got my things from the UK and have internet access in my new flat in Germany! Now I have some time to recover from that stressful move.
Finally, I can share with you that I was selected as one of the 2022 fellows for the Software Sustainability Institute. Here’s the official announcement. I’m joining the ranks of some truly inspirational people and I can’t wait.
My YouTube channel has reached 1000 subscribers! 🎉 Honestly incredible; I did not think this would ever happen, and it feels pretty amazing. Glad to have so many people along for the journey.
I have also re-joined the Ship 30 for 30 course for the January cohort. I have been writing a Twitter thread about machine learning each day. This has been tremendous fun.
I’ve started reading again. I’ve been enjoying some Terry Pratchett on my new Kindle Paperwhite; the backlight is such a gamechanger.
These start off as Twitter threads; if you want to check those out, click the little birdy 🐦!
Let’s start off with my Software Sustainability Institute fellowship announcement post. 🐦 I also posted my application video, and you can tell I was nervous.
I wrote about how trillions of dollars in value are lost in PDFs worldwide. 🐦
Two articles tackle the issue of starting out. How do you gauge the difficulty of a new machine learning project? 🐦 Alternatively, if you want to start your first machine learning project, here are 3 websites that are perfect to get you off the ground. 🐦
Are data science and AI dead? I don’t think so, and here’s why. 🐦
You’re using Kaggle wrong; here’s how to get the most out of the platform. 🐦
The final article is a long one. I finally got around to writing up some thoughts about the L1 and L2 norms and why they’re used as loss functions in ML and physics. To round it off, I share a personal recommendation from a Stanford professor. This one is long, so I recommend not reading it on Twitter, but regardless: 🐦
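The core intuition of that article can be sketched in a few lines: the L2 loss squares residuals, so a single outlier dominates the total, while the L1 loss grows only linearly and stays robust. A minimal sketch with made-up numbers (the data below is purely illustrative):

```python
import numpy as np

# Hypothetical targets and predictions; the last target is an outlier.
y_true = np.array([1.0, 2.0, 3.0, 100.0])
y_pred = np.array([1.1, 1.9, 3.2, 3.0])

residuals = y_true - y_pred

# L2 loss (mean squared error): squaring makes the outlier's
# residual (97) contribute 9409 to the sum, dwarfing the rest.
l2 = np.mean(residuals ** 2)

# L1 loss (mean absolute error): the same outlier contributes
# only 97, so the loss stays on the scale of the typical error.
l1 = np.mean(np.abs(residuals))

print(f"L2 (MSE): {l2:.2f}")  # → 2352.27, dominated by the outlier
print(f"L1 (MAE): {l1:.2f}")  # → 24.35
```

This is also why minimizing L2 recovers the mean of the data while minimizing L1 recovers the median; the article goes into the details.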
PyData Global published the talks on YouTube, so you can now watch: How to Guarantee No One Understands Your ML Project.
How do you make a machine learning model robust to outliers?
Post them on Twitter and tag me; I'd love to see what you come up with. Then I can include them in the next issue!
Can you make fire with a broken empty lighter?
I watched the short documentary Holy Ghost about the change of singers in the band Architects after tragedy struck. It’s sad but beautiful (CW: death).
I did not know union busting was this prevalent in the US until John Oliver covered it on Last Week Tonight.