Excited! 1: Deep Learning Neural Networks
One thing that I've been tremendously! excited! about lately is a kind of machine learning called "deep learning," ...or "deep learning neural networks." Artificial neural networks are computer programs that simulate simplified versions of our own mammalian brains to train computers to recognize certain patterns. With deep learning, it turns out (surprise!) that feeding computers a massive dataset of training examples often works better than painstakingly hand-writing rules that tell the computer exactly what to do in each case. After tuning itself to a particular dataset, a neural network can infer what to do, ...even for cases you haven't explicitly told it how to handle.
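To make "learning from examples instead of hand-written rules" concrete, here's a toy sketch in Python. It uses a nearest-centroid classifier rather than an actual deep network (far simpler, but the spirit is the same), and the "cat vs. dog" features and numbers are entirely made up for illustration:

```python
# Toy illustration: learn from labeled examples, then generalize to a
# new input we never saw during training. (Nearest-centroid classifier,
# not a real neural network -- just the simplest possible demo.)

def train(examples):
    """examples: list of (features, label). Returns one centroid per label."""
    sums, counts = {}, {}
    for features, label in examples:
        s = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            s[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [x / counts[label] for x in s] for label, s in sums.items()}

def predict(centroids, features):
    """Pick the label whose centroid is closest to the new input."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(c, features))
    return min(centroids, key=lambda label: dist(centroids[label]))

# Made-up dataset: (ear pointiness, weight in kg) -> species
examples = [
    ([0.9, 4.0], "cat"), ([0.8, 5.0], "cat"),
    ([0.3, 20.0], "dog"), ([0.2, 30.0], "dog"),
]
centroids = train(examples)
print(predict(centroids, [0.85, 4.5]))  # an unseen case -> "cat"
```

Nobody told the program "pointy ears means cat" -- it inferred that boundary from the examples, which is the core idea deep learning scales up to millions of parameters.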
Deep learning generally works really well for narrowly defined tasks. It's how Google lets you search your photos for pictures of cats, how speech recognition can understand accented speech, how Netflix is working to improve its recommendations based on what you like, ...even how some self-driving cars are learning to drive!
Fascinating, maaaaybe a little creepy, okay, but what’s going ON here? Is it magic? Are neural networks using actual neurons harvested from loveable woodland creatures? Is Skynet about to come online? (Nope. Eww, no. Ha, not close.) For a gentle explainer, let’s start with a six-minute video by Nat & Lo of Google:
https://www.youtube.com/watch?v=bHvf7Tagt18
Moving beyond this level of understanding tends to assume that you can fill in big conceptual gaps yourself, and quickly dives into a lot of math. Memo Akten's look at #deepdream has a good high-level, math-free explainer; Adam Harley's illustrated essay begins to methodically piece together the math, but is very much a work-in-progress; Wikipedia provides the usual rabbit hole; and there are full free online courses for machine learning (Goldsmiths & Kadenze) and deep learning (Google & Udacity).
All that said, why get so! excited! about something so difficult to wrap my brain around?
- Neural networks are getting better and better at classifying inputs—images, sounds, gestures, stock prices—and as both a software designer and someone who works with museums, I see great potential here for assisting sense-making and curation work to create more personalized experiences for people.
- Machine learning provides a way for non-technical people to train computers instead of program them directly. (Gene Kogan is exploring some of the artistic possibilities in an NYU ITP course this semester that sounds utterly fascinating.)
- Machine learning and deep learning are things you can play with today. Google's open-sourced their TensorFlow library, and there are plenty of other options for everything from iOS to JavaScript. Choose an open source library, find a large enough dataset, and go to town.
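If you want a taste of what those libraries automate before diving into TensorFlow itself, the core mechanics fit in a few lines of plain Python. This is a hedged sketch, not anyone's production code: a single artificial neuron (logistic regression) trained by gradient descent to learn the OR function, with the learning rate and epoch count chosen arbitrarily for this toy:

```python
import math
import random

random.seed(0)
w = [random.uniform(-1, 1) for _ in range(2)]  # two weights, randomly initialized
b = 0.0                                        # bias term

# Training data: the logical OR function
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

def forward(x):
    """One neuron: weighted sum of inputs, squashed by a sigmoid."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

for _ in range(5000):                  # training loop
    for x, target in data:
        y = forward(x)
        err = y - target               # gradient of cross-entropy loss w.r.t. z
        for i in range(2):
            w[i] -= 0.5 * err * x[i]   # nudge each weight toward the target
        b -= 0.5 * err

print([round(forward(x)) for x, _ in data])  # -> [0, 1, 1, 1]
```

A library like TensorFlow does essentially this—forward pass, gradient, weight update—but with automatic differentiation across millions of parameters and GPUs doing the arithmetic.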
What do you think? Have you played with machine learning or neural nets? Have some better explanations to recommend? How might some form of machine learning help you? Reply, or drop me a line on twitter.
(And thanks for subscribing!)
Many thanks to rstevens for early feedback on this tinyletter.
January 2016 has been BUSY. In the style of Warren Ellis and the late BERG studio, I'm logging my progress on anonymized projects here. (Code names make EVERYTHING more fun!)
- [⋯] ATREYU consumes most of my time, but we've almost finished a second big milestone, and have laid out a roadmap for 2016. Still porting a slew of client-side JavaScript code to server-side PHP.
- [⋯] Collaboration on the paper for EOMAIA is almost done—just have to start integrating peer-review feedback on Monday.
- [⋯] Learned a ton about the state of indoor geolocation technology for HANDFORD. Preliminary testing was wildly inaccurate, so we’ve moved to a human-in-the-middle fallback plan. More on this in an upcoming email.
- [✓] SOLOMON is done! Spent 3–4 full days of after-hours spreadsheeting with colleagues in other time zones; very grateful to have worked with them!
- [←] FAWKESTAIL is on the back burner until at least late February; after digging through notes on GitHub, I got the firmware updated, but I still need to set up a Raspberry Pi server for it.
- [→] On deck in the next two weeks: submitting DAMERON (!!!) and putting a bow on RIO GRANDE. (Dug up a lot of ancient paperwork for the former.)