Where Tech Leaders and Students Really Think AI Is Going

The future never feels completely certain. But in this age of rapid, intense transformation—political, technological, cultural, scientific—it's as hard as it's ever been to get a sense of what's around the next corner.

Here at WIRED, we're obsessed with what comes next. Our pursuit of the future usually takes the form of powerfully reported stories, in-depth videos, and interviews with the people who help define it. That is also why we recently adopted a new tagline: For Future Reference. We're focused on stories that don't just explain what's coming, but help shape it.

In that spirit, we recently interviewed a range of luminaries from the various worlds that WIRED touches—people who joined us at our recent Big Interview event in San Francisco—as well as students who have spent their entire lives inundated with technologies that are increasingly likely to disrupt their lives and livelihoods. The main focus was, not surprisingly, on artificial intelligence, but the conversation extended to other areas of culture, technology, and politics. Think of it as a benchmark of how people think about the future today—and maybe even a rough map of where we're going.

AI everywhere, all the time

What is clear is that AI is now as integrated into people's lives as search has been since the AltaVista days. Like search, the use cases tend toward the practical or mundane. “I use a lot of LLMs to answer all the questions I have throughout the day,” says Angel Tramontin, a student at UC Berkeley's Haas School of Business.

Several of our respondents noted that they had used AI in the last few hours, even in the last few minutes. Lately, Anthropic cofounder and president Daniela Amodei has been using her company's chatbot to help with childcare. “Claude actually helped my husband and I potty train our older son,” she says. “And I recently used Claude to do the equivalent of panic Googling symptoms for my daughter.”

She is not the only one. Wicked director Jon M. Chu turned to LLMs “just to get some advice about my kids' health, which might not be the best,” he says. “But it's a good starting point.”

AI companies themselves see health as a potential growth area. OpenAI announced ChatGPT Health earlier this month, revealing that “hundreds of millions of people” use the chatbot to answer health and wellness questions every week. (ChatGPT Health introduces additional privacy measures, given the sensitivity of the questions.) Anthropic's Claude for Healthcare targets hospitals and other healthcare systems as customers.

Not everyone we interviewed took such an immersive approach. “I try not to use it at all,” says UC Berkeley undergraduate student Sienna Villalobos. “When it comes to doing your own work, it's very easy to have an opinion. AI shouldn't be able to give you an opinion. I think you should make that yourself.”

That view may increasingly be in the minority. Nearly two-thirds of American teens use chatbots, according to a recent Pew Research Center study, and about 3 in 10 report using them daily. (Given how intertwined Google Gemini is with search these days, many more may be using AI without even realizing it or intending to.)

Ready to launch?

The pace of AI development and deployment is relentless, despite concerns about its possible effects on mental health, the environment, and society in general. In this wide-open regulatory environment, companies are largely left to self-police. So without guardrails from lawmakers, what questions should AI companies ask themselves before any launch?

“'What could go wrong?' is a really good and important question that I wish more companies would ask,” says Mike Masnick, founder of the tech and policy news site Techdirt.


