Hey folks 👋
The giveaway is done, I've had some time off, and life's good! Here's some machine learning!
DeepMind made the news by publishing a collaborative effort on nowcasting with the Met Office.
The FDA has cleared the first machine learning system to spot cancer.
Since we're so deep in the physical realm today, here's a fantastic Python library that can help you with your meters, feet, Celsius, and Kelvin. (Seriously, it's very good.)
I visited London over the weekend and met all of my ex-colleagues and some friends. It was a blast. One friend organised a clue hunt through London. You get clues via SMS, and once you solve one at a location, it sends you on to the next. It's a very mindful way to explore a city. Would highly recommend!
I had a lovely weekend in London.
– Jesper Dramsch (@JesperDramsch) October 1, 2021
I also made this and I think it’s very cool.
(Click to see the full panorama.) pic.twitter.com/7q0OdmxrF4
I also spent days trying to fix an obscure SSL error on my computer. Nothing like not being able to communicate with any APIs, access work machines, or even pip install anything.
I decided my new course will be on "financial modelling", or rather, price prediction using different statistical and machine learning models.
Now that even I dare to go outside again (sometimes), I realized that phones really don't hold a charge, especially under heavy use. I was very thankful for my trusty old power bank when I was out and about.
It's done! I sent out all the emails to the winners!
Please check your spam, because there might be a prize waiting for 20 of you! The winners are also announced on the website, first name only of course: dramsch.net/giveaway/
If you post about it anywhere, make sure to tag me, so I can see and possibly share!
I published a video about getting job experience when no one will hire you without job experience. I have a blog post that goes with it if you're more of the reading type!
Weather forecasting is an interesting beast.
Numerical weather prediction models have become quite good at predicting the weather days ahead. However, high-resolution forecasts of the immediate future, i.e. up to a few hours ahead, are still mostly unsolved. This discipline is called nowcasting.
I actually worked on nowcasting in the HYMS project, where we were testing a novel microwave sensor. DeepMind has now published a paper that uses the classical radar data.
The data we deal with in nowcasting is time steps of maps of radar measurements. Usually, every five minutes we obtain a full map of the precipitation in a certain area. The goal then is to use a few of these time steps to predict a few hours out. DeepMind specifically uses 20 minutes of recorded data (4 steps) to predict the next 90 minutes (18 steps).
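To make those shapes concrete, here's a minimal sketch of what a single training example looks like. The domain size is my own assumption for illustration, not a number from the paper.

```python
import numpy as np

# Hypothetical domain: a 256x256 grid, one radar scan every 5 minutes.
height, width = 256, 256
n_input_steps = 4    # 20 minutes of context
n_output_steps = 18  # 90 minutes of lead time

# Context: the last four radar frames, stacked along the time axis.
context = np.random.rand(n_input_steps, height, width)

# Target: the next eighteen frames the model should predict.
target = np.random.rand(n_output_steps, height, width)

print(context.shape, "->", target.shape)  # (4, 256, 256) -> (18, 256, 256)
```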
How novel is their approach?
People have thrown machine learning at nowcasting as soon as it became available. Let's look through a few approaches:
CNN: You know I love a good U-Net. Due to the fixed step size, we can easily predict a stack of outputs [1] (there's a toy sketch of this after the list). In our work, this approach did pretty well once we swapped the loss function for something more appropriate than mean squared error.
LSTM: There's a time component, so of course someone will try an LSTM [1]. You can also pair it with a convolutional layer to get a ConvLSTM [2]. These work reasonably well but don't scale particularly well, as is typical for LSTMs.
Generative models: Since we're trying to generate data, a generative model seems appropriate. These learn the distribution that generates the kind of data we're looking at. Generative Adversarial Networks worked particularly well, since they "learn" a loss function and aren't strictly bound to the assumptions of mean squared error.
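Here's the toy sketch of the stacked-output idea from the CNN bullet: treat the four input time steps as channels and predict all 18 future steps in one forward pass. This is a heavily simplified illustration of mine, not the architecture from any of the cited papers.

```python
import torch
import torch.nn as nn

class TinyNowcastCNN(nn.Module):
    """Toy sketch: input time steps as channels, one output channel per future step."""

    def __init__(self, in_steps=4, out_steps=18):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_steps, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(64, out_steps, kernel_size=1),  # one channel per future step
        )

    def forward(self, x):   # x: (batch, 4, H, W)
        return self.net(x)  # -> (batch, 18, H, W)

model = TinyNowcastCNN()
frames = torch.rand(1, 4, 128, 128)
print(model(frames).shape)  # torch.Size([1, 18, 128, 128])
```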
The DeepMind paper sits right in that generative model space. Prior work used conditional Generative Adversarial Networks; DeepMind, however, came up with a clever approach.
Borrowing an idea from video generation [3], they use two discriminator networks, i.e. two loss functions! One in the spatial domain, to ensure individual precipitation maps look right, and one in the temporal domain (essentially a 3D CNN) that ensures temporal consistency. Isn't that neat?
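Here's a rough sketch of that dual-discriminator setup. The layer counts and sizes are placeholders of mine, not DeepMind's actual networks; the point is only that one critic sees single frames while the other sees whole sequences through 3D convolutions.

```python
import torch
import torch.nn as nn

class SpatialDiscriminator(nn.Module):
    """Judges whether individual precipitation maps look realistic."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1),
        )

    def forward(self, frame):      # frame: (batch, 1, H, W)
        return self.net(frame)     # realness score per frame

class TemporalDiscriminator(nn.Module):
    """Judges whole sequences with 3D convolutions to enforce temporal consistency."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 32, (3, 4, 4), stride=(1, 2, 2), padding=(1, 1, 1)),
            nn.LeakyReLU(0.2),
            nn.Conv3d(32, 64, (3, 4, 4), stride=(1, 2, 2), padding=(1, 1, 1)),
            nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(64, 1),
        )

    def forward(self, sequence):   # sequence: (batch, 1, T, H, W)
        return self.net(sequence)  # realness score per sequence

frames = torch.rand(8, 1, 64, 64)        # a batch of single frames
sequence = torch.rand(2, 1, 18, 64, 64)  # two full forecast sequences
print(SpatialDiscriminator()(frames).shape, TemporalDiscriminator()(sequence).shape)
```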
We love some good regularization. It makes our neural networks work, after all. In this case, the final term in the DeepMind solution penalizes deviations of the model on a grid-cell level. It calculates the mean of multiple model predictions and compares it to the real data.
I can imagine that the regularization term would be very tricky to work with. In many cases, it would lead to extreme overfitting.
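To make the idea concrete, here's a hedged sketch of such a regularizer: average several generator samples and penalize the per-grid-cell deviation from the observed radar. The paper's exact term differs in its weighting; this is only the gist.

```python
import torch

def grid_cell_regularizer(samples, observed):
    """Sketch of a grid-cell regularization term (not the paper's exact formula).

    samples:  (n_samples, T, H, W) predictions from different latent draws
    observed: (T, H, W) the real radar sequence
    """
    mean_prediction = samples.mean(dim=0)             # average over latent draws
    return (mean_prediction - observed).abs().mean()  # L1 penalty per grid cell

samples = torch.rand(6, 18, 128, 128)  # six samples, 18 future frames each
observed = torch.rand(18, 128, 128)
print(grid_cell_regularizer(samples, observed))
```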
Finally, in addition to this neat model architecture, DeepMind introduced a trick that will be particularly useful for GAN aficionados. GANs generate data from a latent distribution, which is usually a fancy term for a vector of random numbers that is the input to the generative neural network.
How do you ensure that the distribution we draw from is spatially dependent, the way rain is? We integrate over the latent vectors!
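Here's how I'd sketch that, under two assumptions of my own: the latent input is a small spatial grid rather than a flat vector (so the randomness itself has spatial structure), and "integrating" amounts to averaging the generator's output over several latent draws.

```python
import torch

def sample_spatial_latent(batch=1, depth=8, grid=(8, 8)):
    # Assumption: a low-resolution random field the generator upsamples into a forecast.
    return torch.randn(batch, depth, *grid)

def integrate_over_latents(generator, context, n_draws=16):
    # Monte Carlo estimate of the expected forecast under the latent prior.
    predictions = [generator(context, sample_spatial_latent()) for _ in range(n_draws)]
    return torch.stack(predictions).mean(dim=0)

def dummy_generator(context, z):
    # Stand-in for illustration only; a real generator would actually use z.
    return context.mean(dim=1, keepdim=True).repeat(1, 18, 1, 1)

context = torch.rand(1, 4, 128, 128)
forecast = integrate_over_latents(dummy_generator, context)
print(forecast.shape)  # torch.Size([1, 18, 128, 128])
```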
There are a lot of small tweaks and real gems in the paper. I highly recommend a read.
Send me your answers or post them on Twitter and tag me. I'd love to see what you come up with. Then I can include them in the next issue!
Veritasium explores how hidden technology transformed bowling. I'm not even THAT into bowling and this was incredibly interesting!
You put your blood and sweat into an application, only for it to get rejected on the day the application closes. Applicant tracking systems (ATS) are to blame; here's a fun exploration of different approaches to writing a résumé.
I'm trying to become better at writing online and generally produce interesting things. I found this article on modern digital writing extremely insightful.