Generative AI tools are generating less interest than just a few months ago.
Is the AI boom already over? When generative AI products started rolling out to the general public last year, they kicked off a frenzy of excitement and fear. People were amazed at the images and words these tools could create from a single text prompt. Silicon Valley salivated over the prospect of a transformative new technology, one it could finally make a lot of money from after years of stagnation and the flops of crypto and the metaverse. And then there were the concerns about what the world would look like after generative AI transformed it. Millions of jobs could be lost. It might become impossible to tell what was real and what was made by a computer. And if you want to get really dramatic about it, the end of humanity might be near. We glorified and dreaded the incredible potential this technology had.

Several months later, the bloom is coming off the AI-generated rose. Governments are ramping up efforts to regulate the technology, creators are suing over alleged intellectual property and copyright violations, people are balking at the privacy invasions (both real and perceived) that these products enable, and there are plenty of reasons to question how accurate AI-powered chatbots really are and how much people should depend on them.

Assuming, that is, they're still using them. Recent reports suggest that consumers are starting to lose interest: The new AI-powered Bing search hasn't made a dent in Google's market share, ChatGPT is losing users for the first time, and the bots are still prone to basic errors that make them impossible to trust. In some cases, they may be even less accurate now than they were before. Is the party over for this party trick?

Generative AI is a powerful technology that isn't going anywhere anytime soon, and the chatbots built with it are among the most accessible ways for consumers to try it out for themselves. But recent reports suggest that, as the initial burst of excitement and curiosity fades, people may not be as into chatbots as many expected.

OpenAI and its ChatGPT chatbot quickly took the lead as the buzziest generative AI company and tool out there, no doubt helped along by OpenAI being one of the first companies to release its tools to the general public, as well as by a partnership with Microsoft worth billions of dollars. That partnership led to Microsoft's big February announcement that it was incorporating a custom chatbot, built with OpenAI's large language model (LLM), the same technology that powers ChatGPT, into Bing, its web search engine. Microsoft hailed generative AI-infused search as the future of web search: Instead of getting back a list of links or knowledge panels, users would get a single response that combined information from multiple websites. There was plenty of hype, and Bing suddenly went from being a punchline to a potential rival in a market so completely dominated by Google that the company's name is literally synonymous with searching the web.

Google rushed to release a chatbot of its own, called Bard. Meta, not to be outdone and possibly still smarting from its disastrous metaverse pivot, released not one but two open source(ish) versions of its large language model. OpenAI licensed ChatGPT out to other companies, and dozens lined up to put it in their own products.

That reinvention may be a longer way off than the excitement of a few months ago suggested, assuming it happens at all.
A recent Wall Street Journal article said that the new Bing isn't catching on with consumers, citing two different analytics firms that put Bing's market share at roughly the same level now as it was in the pre-AI days of January. (Microsoft told the WSJ that those firms were underestimating the numbers, but it wouldn't share its internal data.) According to Statcounter, Microsoft's web browser, Edge, which consumers had to use in order to access Bing Chat, did get a user bump, but it barely moved the needle and has already started to recede, while Chrome's market share increased over the same period. There is still hope for Microsoft, however: Once Bing Chat is easier (or possible at all) to access on other, more popular browsers, it may well get more use, and Microsoft told the WSJ it plans to make that happen soon.

Meanwhile, OpenAI's ChatGPT seems to be flagging, too. For the first time since its release last year, traffic to the ChatGPT website fell, dropping by almost 10 percent in June, according to the Washington Post. Downloads of its iPhone app have fallen off, too, the report said, although OpenAI wouldn't comment on the numbers. And Google has yet to integrate its chatbot into its search services as extensively as Microsoft did, keeping it off the main search page and continuing to frame it as an experimental technology that "may display inaccurate or offensive information." Google didn't respond to a request for comment on Bard usage numbers.

Google's approach may be the right one, given how problematic some of these chatbots can be. We now have myriad examples of chatbots going off the rails, from getting really personal with a user to spouting off complete inaccuracies as truth to reflecting the inherent biases that seem to permeate all of tech. And while some of those issues have been mitigated by some companies to some degree along the way, things seem to be getting worse, not better. The Federal Trade Commission is looking into ChatGPT's inaccurate responses. A recent study found that OpenAI's GPT-4, the newest version of its LLM, showed marked declines in accuracy in some areas in just a few months, indicating that, if nothing else, the model is changing or being changed over time, which can cause drastic differences in its output. And attempts by journalistic outlets to fill pages with AI-generated content have resulted in multiple and egregious errors. As chatbot-fueled cheating proliferated, OpenAI had to pull its own tool for detecting ChatGPT-generated text because it sucked.

Last week, eight companies behind LLMs, including OpenAI, Google, and Meta, took their models to DEF CON, a massive hacker convention, to have as many people as possible test them for accuracy and safety in a first-of-its-kind stress test, a process called "red teaming." The Biden administration, which has been making a lot of noise about the importance of AI technology being developed and deployed safely, supported and promoted the event. President Biden's science adviser and director of the White House Office of Science and Technology Policy, Arati Prabhakar, told Vox it was a chance to "really figure out how well these chatbots are working; how hard or easy is it to get them to come off the rails?" The goal of the challenge was to give the companies some much-needed data on whether and how their models break, supplied by a diverse group of people who would presumably test the models in ways the companies' internal teams hadn't.
We'll see what they do with that data, and it's a good sign that they participated in the event at all, though the fact that the White House urged them to do so surely was a motivating factor. In the meantime, these models and the chatbots created from them are already out there being used by hundreds of millions of people, many of whom will take what the chatbots say at face value, especially when they may not know that the information is coming from a chatbot in the first place (CNET, for example, barely disclosed which of its articles were written by bots). As various reports show the public's waning interest in some AI-powered tools, however, those tools need to get better if they're going to survive. And we don't even know if the technology actually can be fixed, given that even its own developers claim not to fully understand its inner workings.

Generative AI can do some amazing things. There's a reason Silicon Valley is excited about it and so many people have tried it out. What remains to be seen is whether it can be more than a party trick, which, given its still-prevalent flaws, is probably all it should be for now.

Sara Morrison, senior reporter

What normal Americans — not AI companies — want for AI
Public opinion about AI can be summed up in two words: Slow. Down.

How does Elon Musk get away with it all?
The billionaire's heroic image is built on media praise, breathless fans, and … romance novel tropes.

The LED light revolution has only just begun
The heir to the incandescent bulb is just getting started.
Why in the world are Elon and Zuck planning to punch each other?
Maybe they're not?

Is Zoom using your meetings to train its AI?
Zoom returns to the office — and to its problematic privacy ways.

Listen to this: Can AI help us talk to animals? Two scientists, Karen Bakker and Aza Raskin, explain how AI might help us translate animal communication, and what we might learn from their squawks, chirps, songs, and chatter. Recorded live at the Aspen Ideas Festival. Listen on Apple Podcasts.

This is cool: The Iceman ... baldeth?