Sorry that it’s been over a month since I last wrote. I’ve been working overtime with my colleagues to reopen a large library that adheres to Covid safety rules while also providing the community with the resources and services they need. Not easy.
And “reopening” is not the right word. We’ve been open all along, but simply shape- and medium-shifting as needed throughout this dreadful year. But I’m writing to you from within Snell Library at Northeastern University, and it feels pretty good. Onward.
An example of the suzani style of textiles from Tashkent, Uzbekistan, by the gifted Madina Kasimbaeva. Incredible as art and as textile technology.
Green hues are obtained from nutshells; yellow comes from saffron or onion peel; blue shades from indigo. After dyeing, threads are boiled with quartz and salt to lock in their colors.
Unfortunately, as Carrie Hertz, Curator of Dress and Textiles at the Museum of International Folk Art in Santa Fe, notes in a blog post, suzani followed the path of many remarkable folk styles from remote parts of the world: as soon as photos of suzani works made their way onto the internet, the style was replicated on craft sites like Etsy, and then, as the conveyor belt of culture inevitably churns, it was quickly cloned, in turn, by mass-produced fashion companies:
“Can GPT-3 Pass a Writer’s Turing Test?” is both an exploratory and commonsensical new paper from Katherine Elkins and Jon Chun. Beginning with the earlier GPT-2 engine, they fine-tuned it on specific authors (from Chekhov to Carrie Bradshaw) to see whether literature professors and students could separate those authors’ real writing from the fake text generated by the computer.
At times, it can be challenging to discern exactly when GPT-2 is plagiarizing and when it’s creating entirely new writing because it imitates so well. Moreover, we’ve run experiments in which both experts and students fail to distinguish between GPT-2 generated text and human. Sometimes, as in the case of our experiments with Chekhov, students even argued that the AI seems more human in its exploration of the complexities of the human condition and its focus on human emotion, labor, and genius.
For all of these reasons, one challenge of working with GPTs is determining whether a particular output is error or genius—much in the same way that AlphaGo [an AI engine that plays the game Go] made a never-before-seen move that was first classified as error but later acknowledged as creative and, indeed, pivotal. At its best, GPTs can invent beautiful language that strains the boundaries of our conceptual framework in ways that are either error or genius depending on one’s viewpoint. Trained on John Donne, GPT writes
Or, if being could express nothing, nothing would be more true.
Then would love be infinite, and eternity nothing.
Elkins and Chun’s conclusion seems just about right, and is one of the better summaries I’ve seen about the state of AI and human expression:
Can GPTs pass a writer’s Turing Test? Probably not, if all output is considered. But with a judicious selection of its best writing? Absolutely…Certainly, it’s not better than our very best writers and philosophers at their peak. Are its best moments better than many humans and even perhaps, our best writers at their worst? Quite possibly. But remember, it’s been trained on our own writing. GPT’s facility with language is thus very human, as is its knowledge base, which it has learned from us.
Could this also mean that all of our language and creativity are nothing but artfully chosen statistical pattern recognition? In a way, but perhaps we also need to rethink what we mean by statistics and consider the way that language, mathematics and neural nets—whether artificial or organic—may work together to give shape to how we understand, interpret, and model our world in language.
(For those who have recently subscribed to this newsletter, also see HI9: “GPT-2 and You.”)
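Elkins and Chun’s closing thought — that language and creativity may be, in some sense, “artfully chosen statistical pattern recognition” — is easiest to see in miniature. Here is a toy sketch of my own (not from their paper, and vastly simpler than GPT-2): a bigram model that learns which word tends to follow which, then generates text by sampling those learned patterns.

```python
# A toy illustration of language as statistical pattern recognition:
# a bigram model that records which word follows which in a corpus,
# then generates new text by sampling those observed patterns.
import random
from collections import defaultdict

def train_bigrams(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    follows = defaultdict(list)
    for a, b in zip(words, words[1:]):
        follows[a].append(b)
    return follows

def generate(follows, start, length=8, seed=0):
    """Walk the bigram table, sampling a plausible next word each step."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        candidates = follows.get(out[-1])
        if not candidates:
            break  # dead end: no word was ever seen after this one
        out.append(rng.choice(candidates))
    return " ".join(out)

# Tiny made-up corpus, loosely echoing the Donne-trained output above.
corpus = "then would love be infinite and eternity nothing and love be true"
model = train_bigrams(corpus)
print(generate(model, "love"))
```

GPT-2’s transformer architecture is enormously more sophisticated than this, but the underlying move — predict the next token from patterns in prior text — is the same, which is why its “knowledge base” is, as the authors say, learned from us.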
A video series on how to add images of plants to books using the technology of the traditional printing press:
In HI23 I speculated about what we might archive from this year to give future generations perspective on it. The Boston Area Research Institute has helpfully saved web posts and other timely data that we can mine for insights:
The COVID in Boston Database [is] a multisource database that comprehensively captures how the dynamics of Boston shifted before, during, and after the shutdown in response to the pandemic.
Their very large data set of posts to Craigslist, for instance, details how people adjusted to working from home through objects discarded and acquired. A raw catalog of COVID needs.
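To give a flavor of that kind of mining, here is a hypothetical sketch (the dataset’s actual schema isn’t shown in this issue, and the item list is my own invention): tallying how often work-from-home objects turn up in Craigslist-style post titles.

```python
# Hypothetical sketch: count mentions of work-from-home objects in
# Craigslist-style post titles. The item list and sample posts are
# invented for illustration, not drawn from the BARI database itself.
from collections import Counter

ITEMS = ["desk", "monitor", "office chair", "treadmill", "commuter bike"]

def tally_items(titles):
    """Count how many titles mention each tracked item."""
    counts = Counter()
    for title in titles:
        lower = title.lower()
        for item in ITEMS:
            if item in lower:
                counts[item] += 1
    return counts

posts = [
    "Free office chair, barely used",
    "Selling commuter bike - working from home now",
    "Desk and monitor, pickup only",
    "Old desk, free",
]
print(tally_items(posts).most_common())
```

Scaled up to the real database, even a crude tally like this starts to trace the shape of the pandemic through the objects people shed and sought.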
I was curious about early appearances of mobile phones in various media, and as an enthusiastic fan of pop music from 1984, I was pleased to learn that “Are We Ourselves?” by the new wave band The Fixx was the first music video with a mobile phone in it. It’s the wonderfully brick-like Motorola KR999, but even in this now-comical early form factor, The Fixx was prescient about what devices like these would mean for our individuality and society. One thing leads to another.
Player piano-like encoded music rolls + saxophone = Playasax, patented at the beginning of the Great Depression:
Stanford acquired a rare surviving version in 2015: