Can AI Avoid the Enshittification Trap?

I recently vacationed in Italy. As one does these days, I ran my itinerary past GPT-5 for sightseeing suggestions and restaurant recommendations. The bot reported that the best choice for dinner near our hotel in Rome was a short walk down Via Margutta. It turned out to be one of the best meals I can remember. When I got home, I asked the model how it chose that restaurant, which I hesitate to reveal here in case I want a table sometime in the future. (Hell, who knows if I'll even return: it's called Babette. Call ahead for reservations.) The response was detailed and impressive. Among the factors were rave reviews from locals, notices in food blogs and the Italian press, and the restaurant's famous combination of Roman and contemporary cooking. Oh, and the short walk.

Something was also needed from my end: trust. I had to buy into the idea that GPT-5 was an honest broker, picking my restaurant without bias; that the restaurant wasn't being shown to me as sponsored content, and that no one was getting a cut of my check. I could have done deep research myself to double-check the recommendation (I did look up the website), but the point of using AI is to bypass that friction.

The experience strengthened my confidence in AI outcomes, but also made me wonder: As companies like OpenAI become more powerful, and as they try to repay their investors, will AI be susceptible to the erosion of value that seems endemic to the tech apps we use today?

Word Play

Writer and tech critic Cory Doctorow calls that erosion “enshittification.” His starting point is that platforms like Google, Amazon, Facebook, and TikTok start out with the goal of pleasing users, but once they have seen off their competitors, they intentionally become less useful in order to collect greater profits. After WIRED republished Doctorow's pioneering 2022 essay about the phenomenon, the term came into the vernacular, mainly because people recognized that the decline it described was deliberate. Enshittification was chosen as the American Dialect Society's word of the year for 2023. The concept is cited so often that it transcends profanity, appearing in places that normally hold their noses at such a word. Doctorow has just published a book of the same name on the subject; the cover image is the emoji for… guess what.

If chatbots and AI agents become enshittified, it could be worse than Google Search becoming less useful, Amazon results being plagued with ads, or even Facebook showing less social content in favor of anger-generating clickbait.

AI is on a trajectory to be a constant companion, providing answers to many of our requests. People already rely on it to help interpret current events and get advice on all kinds of buying choices – and even life choices. Due to the massive cost of creating a full-fledged AI model, it is fair to assume that only a few companies will dominate the field. All of them plan to spend hundreds of billions of dollars in the coming years to improve their models and get them into the hands of as many people as possible. Right now, I'd say AI is in what Doctorow calls the “good for the users” stage. But the pressure to recoup the massive capital investment will be enormous – especially for companies whose user base is locked in. Those conditions, as Doctorow writes, allow companies to abuse their users and business customers “to claw back all the value for themselves.”

When one imagines the enshittification of AI, the first thing that comes to mind is advertising. The nightmare is that AI models will make recommendations based on which companies have paid for placement. That's not happening right now, but AI companies are actively exploring the advertising space. In a recent interview, OpenAI CEO Sam Altman said, “I believe there's probably some cool ad product that we can do that's a net win for the user and kind of positive for our relationship with the user.” Meanwhile, OpenAI just announced a deal with Walmart so the retailer's customers can shop in the ChatGPT app. Can't imagine a conflict there! The AI search platform Perplexity has a program where sponsored results appear in clearly marked follow-ups. But, it promises, “these ads will not change our commitment to maintaining a trusted service that provides you with immediate, unbiased answers to your questions.”


