On May 24, 1844, over an experimental line from Washington, D.C. to Baltimore, Samuel Morse sent the world’s first telegram message. It marked an almost instant shift in the nature of human connection and laid, in more ways than one, the groundwork for the stunning array of technological advancements to come. The telegram read, simply: “What hath God wrought?”
Though it is by nature retrospective — an epigrammatic hint at our excitement for novel or fun ideas and the known probability that we’ll somehow come to regret it — it’s a question we’d be wise to ask more often. Human beings, after all, approach technological inventiveness the way America tends to approach law enforcement: shoot now, ask questions later.
But in order to fully make sense of this, there are some things we need to understand — starting with television. A crude speck on the technological horizon in the 1920s, it was the primary medium for influencing public opinion by the 1950s. Far from being what Mark Fowler, then-chair of the US FCC, would call in 1981 “just another appliance . . . a toaster with pictures”, TV soon penetrated households in full colour. By the mid-1980s, it had firmly established itself as a culturally imperative bit of furniture.
With the exception perhaps of the radio, TV marked the arrival of commercial, pure-bred mass entertainment; a shift away from the more utilitarian innovations that preceded it. Its success, according to David Foster Wallace in his lengthy 1993 essay “E Unibus Pluram: Television and US Fiction”, came partly from reflecting what people want to see. “There is a lot of money at stake, after all; and television retains the best demographers applied social science has to offer, and these researchers can determine precisely what Americans in 1990 are, want, see: what we as Audience want to see ourselves as. Television, from the surface on down, is about desire,” he wrote.
Here we stumble upon a feature of TV acutely familiar to those of us living, say, post-2010. “Television’s biggest minute-by-minute appeal is that it engages without demanding,” Wallace says. Like chocolate or alcohol, TV is an indulgence — totally fine in small quantities, even encouraged, but instantly problematic once it’s regarded as any sort of need. “On the surface of the problem, television is responsible for our rate of its consumption only in that it’s become so terribly successful at its acknowledged job of ensuring prodigious amounts of watching. Its social accountability seems sort of like that of designers of military weapons: unculpable right up until they get a little too good at their job,” Wallace adds.
While network executives are not likely to grasp the higher truth that being “too good at their job” implies, it’s nevertheless true that they enjoyed tremendous success not necessarily in creating, but in isolating, a demographic of lonely, addicted, probably self-conscious watchers. And as any network executive worth their weight in advertising revenue can tell you, a group of schmucks like that will pretty much buy anything.
“Classic television commercials were all about the group,” Wallace notes. “They took the vulnerability of Joe Briefcase” — the name he gives to the average lonely person — “sitting there, watching, lonely, and capitalised on it by linking purchase of a given product with Joe B.’s inclusion in some attractive community. This is why those of us over twenty-one can remember all those interchangeable old commercials featuring groups of pretty people in some ecstatic context having just way more fun than anybody has a license to have, and all united as Happy Group by the conspicuous fact that they’re holding a certain bottle of pop or brand of snack.”
If this has induced in you a colon-puckering sense of déjà vu, it’s because the same approach to “marketable image” got cranked all the way up with the dawn of the internet — not to mention that creepiest of creeps, social media, which took “marketable image” and stretched it out to “marketable lifestyle”. Where TV had “the best demographers applied social science has to offer”, the internet (and social media) has that plus an arsenal of high-powered algorithms — designed to offer a user experience akin to being trapped in quicksand — so ominously effective, so science-fictionally extreme, even Isaac Asimov might have snapped.
But for social media to ensnare us, it first needs to entice us. Part of how or why this occurs comes down to the way new technologies are marketed. That Facebook, Instagram, Twitter and dozens of other platforms can in any way be considered “social” media is, I think, one of the most comedic peculiarities of the 21st century. (There is no need to mention the similarly hilarious fact that bitcoin, the world’s first decentralised cryptocurrency, is still — 14 years after “Satoshi Nakamoto” crapped it out in 2008 — struggling to find a legitimate foothold beyond money laundering . . . but there you go.)
In a letter published with Facebook’s IPO announcement in 2012, a palpably optimistic young Zuckerberg said the Facebook team was “extending people’s capacity to build and maintain relationships”, and that by doing so, “we hope to rewire the way people spread and consume information”. Does anyone else, looking back from the year 2023, detect a slight sense of naivety? A hint of boyish ignorance? Am I alone in finding the remarks kind of laughable given that Facebook (1) turned out to be a death sentence for real social relationships, and (2) could not have been worse for human-info interactions? Ditto for Instagram, Twitter, TikTok etc., which hitched their wagons to Facebook’s mission and went about the financially lucrative work of dividing humanity.
And yet, I would argue that these platforms did fulfil their stated objective of connecting people . . . in the beginning. If you can recall, those first few years of Facebook, for example, were almost preternaturally fun and pure. But the technologies that develop on the sharp edge of innovation are burdened, uniquely so, with the ever-present need for growth — more users, more engagement, more revenue — and it took little time for that original, perhaps idealistic purpose to be supplanted by a new one: the two-pronged determination to put adverts in front of eyes, and to collect and sell user information.
Now, you know that I know that you know this stuff already. The perils of our Internet Age have been thrashed to bits by a long and growing list of op-eds, research papers, whistleblowers, and expert testimonies. What we forget, however, is that TV failed to accomplish in 80 years what social media did with ease in less than 20. We are on an unnervingly steep technological growth curve — one which is fundamentally irreversible and only growing steeper.
It was presumably with similar concerns in mind that the Pew Research Center published a report in June 2020, in which experts offer a few cautious predictions about the trajectory of technological innovation. “We will use technology to solve the problems the use of technology creates, but the new fixes will bring new issues,” said Peter Lunenfeld, a professor of design, media arts and digital humanities at the University of California, Los Angeles. “Every design solution creates a new design problem, and so it is with the ways we have built our global networks.”
Larry Rosen, a professor emeritus of psychology at California State University, Dominguez Hills: “Smart people are already working on many social issues, but the problem is that while society is slow to move, tech moves at lightning speed. I worry that solutions will come after the tech has either been integrated or rejected.”
Oscar Gandy, a professor emeritus of communication at the University of Pennsylvania: “Corporate actors will make use of technology to weaken the possibility for improvements in social and civic relationships.”
An unnamed chair of political science: “Technology always creates two problems for every one it solves. At some point, humans’ cognitive and cooperative capacities — largely hard-wired into their brains by millennia of evolution — can’t keep up. Human technology probably overran human coping mechanisms sometime in the later 19th century. The rest is history.”
In fact, the whole report is a trove of interesting perspectives, and not all of them are as critical of technology’s long arms and galloping pace as those above. But many of these views seem to be not so much about what these experts think themselves, but more about what technology might think of itself in 10 years’ time. Only Yaakov J. Stein, chief technology officer at RAD Data Communications, appeared to break the trend:
“The problem with AI and machine learning is not the sci-fi scenario of AI taking over the world and not needing inferior humans. The problem is that we are becoming more and more dependent on machines and hence more susceptible to bugs and system failures,” he said. “With the tremendous growth in the amount of information, education is more focused on how to retrieve required information rather than remembering things, resulting not only in less actual storage but less depth of knowledge and the lack of ability to make connections between disparate bits of information, which is the basis of creativity.”
So at what point do we ask, with absolute seriousness: “What hath God wrought?” Because it’s here I’ll suggest to you that we may no longer be living in the age of social media — the “Internet Age”. We are, in fact, staring down the barrel of some truly awe- and fear-inspiring new developments; and though some of them are here now, dazzling a bewildered public with their embryonic powers, we are still far from understanding them.
Which is kind of insane, when you think about it — that human beings can bring something to a version of life without really knowing what it is, what it can do, what it might do years from now.
If, like me, you lack the basic understanding required to confidently explain what ChatGPT is, you may be more familiar with the collective panic it inspired when it launched at the end of November last year. A staggering leap forward in what is called “generative AI”, ChatGPT was developed by the San Francisco-based firm OpenAI and allows users to type questions and receive “conversational” responses. But the platform has proven to be unexpectedly versatile. It can also write and debug computer programs, and compose song lyrics, plays, poetry and student essays. It can write jokes and job applications, all in response to simple text prompts. It’s also racist, sexist and insidiously biased, and — by OpenAI’s own admission — “sometimes writes plausible-sounding but incorrect or nonsensical answers.”
Nevertheless, ChatGPT was recently crowned the fastest-growing consumer application of all time, hitting 100 million users in just two months. By comparison, the telephone took 75 years, the internet seven years, Facebook 4.5 years, Instagram 2.5 years, and TikTok nine months to reach the same number, according to ResearchGate.
In January, Nick Cave (of Nick Cave and the Bad Seeds fame) was sent the lyrics to a song written by ChatGPT “in the style of Nick Cave”. In an impressively considered response — between calling the lyrics “bullshit” and “a grotesque mockery of what it is to be human” — Cave captured with typical eloquence what many people seem to be feeling:
“I understand that ChatGPT is in its infancy but perhaps that is the emerging horror of AI — that it will forever be in its infancy, as it will always have further to go, and the direction is always forward, always faster. It can never be rolled back, or slowed down, as it moves us toward a utopian future, maybe, or our total destruction. Who can possibly say which?”
One could argue that human beings were never meant to evolve this far; that we should have been bound inexorably to the natural world; that we’ve become, in a sense, too big for our boots — a hideously jacked-up answer to Icarus. One might also argue that the further we blitz past that evolutionary line, the more we need to adapt. Which again is no issue, as long as we agree that at some point we might reach a limit of a different kind.
My feeling, in the end, is that we seem to be hurtling towards some undefined point of no return. I am not as convinced as some appear to be that the answer to technology’s many documented problems — even in light of its obvious benefits — is more technology. The net benefit, according to the Middle Gut Institute of Scales and Measurements, tips heavily to the negative.
But few things illustrate this point better than the last lines of an extremely short story called “Answer” by Fredric Brown, published in 1954. Two scientists have, at an unspecified point in the presumably distant future, just finished work on a sort of cosmic mega-computer. They throw the switch, linking “all of the monster computing machines of all the populated planets in the universe — ninety-six billion planets — into the supercircuit that would connect them all into one supercalculator, one cybernetics machine that would combine all the knowledge of all the galaxies.”
Dwar Ev stepped back and drew a deep breath. “The honour of asking the first question is yours, Dwar Reyn.”
“Thank you,” said Dwar Reyn. “It shall be a question which no single cybernetics machine has been able to answer.”
He turned to face the machine. “Is there a God?”
The mighty voice answered without hesitation, without the clicking of a single relay.
“Yes, now there is a God.”
Sudden fear flashed on the face of Dwar Ev. He leaped to grab the switch. A bolt of lightning from the cloudless sky struck him down and fused the switch shut.
Oliver Gray is Contributing Editor of Art of the Essay.