Windows Copilot Newsletter #2
Google sticks AI into everything in sight, Microsoft uplifts OneDrive - does this mean Bing Chat will be reading all your files?
Welcome to the second edition of the Windows Copilot Newsletter, a weekly drop of the most important and most interesting news about Windows Copilot, Bing Chat, and the rapidly expanding universe of AI chatbots.
Top News
ChatGPT+ can now browse the Web! After a somewhat fumbled introduction, users of OpenAI’s subscription-based AI chatbot can now ‘browse with Bing’ - meaning ChatGPT+ can transcend its ‘knowledge cutoff’ with real-time data. Read about that here.
Google, the perennial also-ran in AI chatbots - a field it invented - announced this week that it will be enhancing Google Assistant, turning it into ‘Assistant with Bard’. It should be able to rifle through - er, I mean scan - your emails, documents, and calendars, and be extra-helpful. That’s the promise, at least. Read it here.
Microsoft wants to read your files, too! All in the service of helping you find them across all the places Microsoft now lets you put them with its expanded and enhanced OneDrive. Copilot integration is coming soon - so you won’t have to search for your files, just ask OneDrive Copilot. If it can make any sense of your files, that is. Read about that here.
Top Tips
Bing Chat has recently acquired some great ‘multimodal’ features - here’s an exploration of five of them.
Feeling the need to get Windows Copilot off your PC completely? Here’s how.
Safely & Wisely
It turns out that sharing a link to a completion from Google Bard means Google will immediately index that link in its search results - as SEO consultant Gagan Ghotra recently discovered. So be careful when you share links to your AI chatbot sessions! Read about that here.
Meanwhile, ChatGPT shouldn’t be able to solve a CAPTCHA - it has guardrails installed to prevent that misuse. Unless you tell the chatbot the image is inside a locket from your deceased grandmother. Then it will do exactly what it’s not supposed to. Read that here.
Longreads
The latest Windows Copilot Strategies newsletter looks at the ‘forbidden’ knowledge secreted away within AI chatbots - and asks whether the guardrails keeping us safely away from the very bad bits of knowledge these models pick up in their training are really as secure as they should be:
It needs to be understood that any sufficiently large language model - more than a few tens of millions of parameters - will be chock full of such forbidden knowledge: how to make bombs, how to commit genocide, how to slander and defame, how to rob a bank, how to create a revolutionary vanguard and overthrow a government, etc. All of that information has been fed into these AI chatbots during their lengthy and expensive training. As that process consistently focuses on quantity over any particular quality of information, it can safely be assumed that pretty much everything we know - from the good to the very, very bad - sits inside the largest of these models.
Read that here.
And if you’re ready for a rant that ranges across AI, late capitalism and the death of physical cash - trust me, it all makes sense - you’ll want to read this.
Update on my book Getting Started with ChatGPT and AI Chatbots
The last ‘front matter’ pieces - my bio and acknowledgements - went to the publisher a few hours ago. From here on in, it’s in the capable hands of BCS Publishing. If you’re interested in pre-ordering Getting Started with ChatGPT and AI Chatbots, click here.
If you know someone who might enjoy this newsletter - please share it with them!
Until next week,