There have been three points in my life where I have felt something like "this is huge...it changes everything". The first was an interaction with a Commodore PET computer, loading a minesweeper-type game. Having a device that let users change what it did and execute those changes with simple commands was stunning. It felt like a writable, creatable world in the hands of each individual. The second moment came in the early 2000s, as digital networks gave individuals the ability to effortlessly share their thoughts, reflections, and creations with the world through web 2.0/social media - anyone creating and anyone sharing.
The third moment happened last week. After playing with DALL-E 2 over the last few months, I started playing with Moonbeam and then with ChatGPT. It has been a confronting experience, and it calls into question much of what we teach in education and how we teach it. I'm not alone. The new generation of LLMs (like GPT-3) changes everything. The costs, however, put these models out of reach of most universities and companies: GPT-3 cost in the range of $10m to train (and generated an enormous carbon impact). Those costs are likely easy to recover as smaller companies and startups use GPT for consumer-facing products (Moonbeam, Jasper) and larger tech companies like Canva integrate image and language models into existing products. On the consumer side, OpenAI appears to be the front runner in creating both the infrastructure for product augmentation and the platform for AI-enabled services.
A few articles of interest in this area:
Galactica was released by Meta last week...and then promptly pulled. Galactica is trained on academic articles and, according to the paper introducing it, "can store, combine and reason about scientific knowledge". Criticism of its errors and inaccuracies was swift. The central complaint was trust: "Nothing Galactica generates is useful, because it's absolutely untrustworthy." This is a key issue. As we consider a future where AI interacting with us cognitively and emotionally is the norm, we have to transfer our mechanisms of trust into a new space and develop trust signals with a new kind of agent. We won't be able to work well with technologies that we don't trust.
There is an economic side to AI that is causing stress for big tech. Alexa is forecast to lose $10b. The AI landscape feels like the pre-dot-com-crash era - lots of ideas, lots of innovation, lots of money, big losses.
Advances in AI Capacity
During our Empowering Learners for the Age of AI conference this week, Chris Dede delivered a keynote on augmenting human intelligence with AI. This reflects the view of some technology leaders that AI's primary role will be to amplify human intelligence rather than replace it.
AI and Science
CSIRO, Australia, has released a report on how AI will impact science: "AI is no longer just the domain of computer scientists or mathematicians; it is now a significant enabling force across all fields of science".
Social and Ethics
How do we manage the "kill chain" in robots/AI? "Weaponised applications of these newly capable robots will also harm public trust in the technology in ways that damage the tremendous benefits they will bring to society."
Robot Landlords are Buying up Houses. The social effects of AI are felt across all aspects of society. Even home buying is not untouched.