At times it felt less like Succession than Fawlty Towers, not so much Shakespearean tragedy as Laurel and Hardy farce. OpenAI is the hottest tech company today thanks to the success of its most famous product, the chatbot ChatGPT. It was inevitable that the mayhem surrounding the sacking, and subsequent rehiring, of Sam Altman as its CEO would play out across global media last week, accompanied by astonishment and bemusement in equal measure.
For some, the farce spoke to the incompetence of the board; for others, to a clash of monstrous egos. In a deeper sense, the turmoil also reflected many of the contradictions at the heart of the tech industry. The contradiction between the self-serving myth of tech entrepreneurs as rebel “disruptors,” and their control of a multibillion-dollar monster of an industry through which they shape all our lives. The tension, too, between the view of AI as a mechanism for transforming human life and the fear that it may be an existential threat to humanity.
Few organizations embody these contradictions more than OpenAI. The galaxy of Silicon Valley heavyweights, including Elon Musk and Peter Thiel, who founded the organization in 2015, saw themselves both as evangelists for AI and as heralds warning of the threat it posed.
“With artificial intelligence we are summoning the demon,” Musk portentously claimed.
‘PREPPERS’
The combination of unrestrained regard for themselves as exceptional individuals conquering the future, and profound pessimism about other people and society, has made fear that the apocalypse is around the corner almost mandatory for the titans of tech. Many are “preppers,” survivalists prepared for the possibility of a Mad Max world.
“I have guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force and a big patch of land in Big Sur I can fly to,” Altman told the New Yorker shortly after OpenAI was created.
The best entrepreneurs, he claimed, “are very paranoid, very full of existential crises.” Including, inevitably, about AI.
OpenAI was created as a non-profit-making charitable trust, the purpose of which was to develop artificial general intelligence, or AGI: roughly speaking, a machine that can match or surpass humans at any intellectual task. It would do so, however, in an ethical fashion, to benefit “humanity as a whole.”
Then, in 2019, the charity set up a for-profit subsidiary to help raise more investment, eventually pulling in more than US$11 billion from Microsoft. The non-profit parent organization, nevertheless, retained full control, institutionalizing the tension between the desire to make a profit and doomsday concerns about the products making the profit. The extraordinary success of ChatGPT only exacerbated that tension.
Two years ago, a group of OpenAI researchers left to start a new organization, Anthropic, fearful of the pace of AI development at their old company. One later told a reporter that “there was a 20 percent chance that a rogue AI would destroy humanity within the next decade.”
That same dread seems to have driven the attempt to defenestrate Altman and the boardroom chaos of the past week.
CREATIVE DESTRUCTION
One may wonder about the psychology of continuing to create machines that one believes may extinguish human life. The irony, though, is that while fear of AI is exaggerated, the fear itself poses its own dangers. Exaggerated alarm about AI stems from an inflated sense of its capabilities. ChatGPT is superlatively good at predicting what the next word in a sequence should be; so good, in fact, that we imagine we can converse with it as with another human.
But it cannot grasp, as humans do, the meanings of those words, and has negligible understanding of the real world. We remain far from the dream of “artificial general intelligence.”
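To see concretely what “predicting the next word” means, here is a minimal sketch in Python, assuming the open-source Hugging Face transformers library and its small public gpt2 checkpoint; these are illustrative stand-ins, not OpenAI’s actual system. The model’s entire output is a probability distribution over which token comes next.

    # A sketch of next-token prediction, assuming the Hugging Face
    # "transformers" library and the public gpt2 checkpoint (illustrative
    # stand-ins, not OpenAI's production models).
    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    prompt = "The best entrepreneurs are very"
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits    # shape: (1, seq_len, vocab_size)

    # The model's whole output: a probability for every possible next token.
    probs = logits[0, -1].softmax(dim=-1)
    top = probs.topk(5)
    for p, idx in zip(top.values, top.indices):
        print(f"{tokenizer.decode(idx.item())!r}: {p.item():.3f}")

Chaining this single step, sampling a token and appending it to the prompt, is all that text generation is; nowhere does the program represent what the words mean.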
“AGI will not happen,” Grady Booch, chief scientist for software engineering at IBM, has suggested, even “in the lifetime of your children’s children.”
For those in Silicon Valley who disagree, believing AGI to be imminent, the answer is to protect humans through “alignment”: ensuring that AI is “aligned with human values and follows human intent.” That may seem a rational way of counteracting any harm AI might cause. Until, that is, you start asking what exactly are “human values,” who defines them, and what happens when they clash?
Social values are always contested, and particularly so today, in an age of widespread disaffection driven often by the breakdown of consensual standards. Our relationship to technology is itself a matter for debate. For some, the need to curtail hatred or to protect people from online harm outweighs any rights to free speech or privacy. This is the sentiment underlying Britain’s new Online Safety Act. For others, free expression and privacy come first, which is why many worry about the consequences of the law.
Then there is the question of disinformation. Few people would deny that disinformation is a problem and will become even more so, raising difficult questions about democracy and trust. The question of how we deal with it remains, though, highly contentious, especially as many attempts to regulate disinformation result in even greater powers being bestowed on tech companies to police the public.
ALGORITHMIC BIAS
Meanwhile, another area of concern, algorithmic bias, highlights the weaknesses of arguments for “alignment.” Algorithms are prone to bias, especially against minorities, precisely because they are aligned with human values.
AI programs are trained on data from the human world, a world suffused with discriminatory practices and ideas. These become embedded in AI software, too, whether in the criminal justice system or healthcare, facial recognition or recruitment.
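A toy sketch, using synthetic data and the scikit-learn library (both assumptions for illustration, not any real deployed system), shows how this embedding happens: a model trained on biased historical hiring decisions reproduces the bias through a proxy feature, even though group membership is never given to it as an input.

    # Synthetic illustration of algorithmic bias, assuming scikit-learn and
    # NumPy; the data and feature names are invented for the example.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000
    group = rng.integers(0, 2, n)               # 0 = majority, 1 = minority
    skill = rng.normal(0.0, 1.0, n)             # genuinely job-relevant
    postcode = group + rng.normal(0.0, 0.3, n)  # proxy correlated with group

    # Historical decisions penalized the minority group at equal skill.
    hired = (skill - 0.8 * group + rng.normal(0.0, 0.5, n)) > 0

    X = np.column_stack([skill, postcode])      # group itself is NOT a feature
    model = LogisticRegression().fit(X, hired)

    for g in (0, 1):
        rate = model.predict(X[group == g]).mean()
        print(f"predicted hire rate for group {g}: {rate:.2f}")
    # The model learns the historical penalty via the postcode proxy and
    # predicts a lower hire rate for group 1, even though skill is
    # identically distributed across both groups.

The point of the sketch is that nobody had to program the discrimination; the model absorbed it from the record of past human decisions it was “aligned” to.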
The problem we face is not that machines may one day exercise power over humans. That is speculation unwarranted by current developments. It is rather that we already live in societies in which power is exercised by a few to the detriment of the majority, and that technology provides a means of consolidating that power.
For those who hold social, political and economic power, it makes sense to project problems as technological rather than social and as lying in the future rather than in the present.
There are few tools useful to humans that cannot also cause harm. But they rarely cause harm by themselves; they do so, rather, through the ways in which they are exploited by humans, especially those with power. That, and not fantasy fears of extinction, should be the starting point for any discussion about AI.