If 2023 was the year of wonder about artificial intelligence, 2024 was the year of trying to make that wonder do something useful without breaking the bank.
There was a “movement from presenting models to actually building products,” said Arvind Narayanan, a Princeton University computer science professor and co-author of the new book “AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference.”
The first 100 million or so people who tried ChatGPT when it was released two years ago actively sought out chatbots, finding them surprisingly useful at some tasks and laughably mediocre at others.
Today, such generative AI technology is being incorporated into more and more tech services, whether we are looking for it or not, from AI-generated answers in Google search results to new AI features in photo editing tools.
“The main mistake in generative AI last year was that companies were releasing these very powerful models without concrete ways for people to utilize them,” Narayanan said. “What we’re seeing this year is that we’re slowly building products that allow us to leverage these capabilities to help people.”
At the same time, since OpenAI released GPT-4 in March 2023 and competitors introduced large language models with similar performance, these models have stopped getting significantly “bigger and qualitatively better,” resetting the hyperbolic expectations that AI was racing every few months toward intelligence superior to humans, Narayanan said. That also means the public debate has shifted from “Will AI kill us?” to treating it like a normal technology, he said.
AI sticker shock
On this year’s quarterly earnings calls, tech executives often fielded questions from Wall Street analysts seeking assurances of future returns on their massive spending on AI research and development. Building the AI systems behind generative AI tools like OpenAI’s ChatGPT and Google’s Gemini requires investing in energy-hungry computing systems running on powerful and expensive AI chips. They require so much electricity that tech giants announced deals this year to tap nuclear power to help run them.
“We’re talking about hundreds of billions of dollars of capital being poured into this technology,” said Goldman Sachs analyst Kash Rangan.
Another analyst at the New York investment bank made headlines over the summer with a report arguing that AI isn’t solving the complex problems that would justify its steep costs. He also questioned whether AI models, even though they are trained on much of the written and visual data produced over the course of human history, will ever be able to do what humans do so well. Rangan takes a more optimistic view.
“We had this fascination that the technology was going to be absolutely game-changing, which it has not been in the two years since ChatGPT was introduced,” Rangan said. “It’s more expensive than we expected, and it’s not as productive as we expected.”
But Rangan remains bullish about its potential, saying AI tools have already proven to “really increase productivity” in sales, design, and many other professions.
AI and your work
Some workers wonder whether AI tools will be used to supplement their work or to replace them as the technology grows. The tech company Borderless AI, for example, uses an AI chatbot from Cohere to draft employment contracts for workers in Turkey and India without the help of outside lawyers or translators.
Video game performers with the Screen Actors Guild-American Federation of Television and Radio Artists, who went on strike in July, said they feared AI could reduce or eliminate job opportunities because it could be used to replicate one performance into many other movements without their consent. Concerns about how movie studios would use AI helped fuel the union’s film and television strike last year, which lasted four months. Gaming companies have also signed side agreements with the union codifying certain AI protections in order to keep working with its members during the strike.
Musicians and writers have expressed similar concerns about AI scraping their voices and books. But Walid Saad, a professor of electrical and computer engineering at Virginia Tech and an AI expert, said generative AI still cannot create original works or “something completely new.”
“You can train it with more data, so it has more information. But having more information doesn’t mean you’re more creative,” he said. “As humans, we understand the world around us, right? We understand physics. If you throw a ball on the ground, you know it’s going to bounce. AI doesn’t understand that.”
Saad cited a meme about AI as an example of that shortcoming. When someone prompted an AI engine to create an image of a salmon swimming in a river, it produced a picture of a river with salmon fillets of the kind found at the grocery store, he said.
“What AI currently lacks is the common sense that humans have, and I think that’s the next step,” he said.
“The future of agents”
That kind of reasoning is an important part of making AI tools more useful to consumers, said Vijoy Pandey, senior vice president of Outshift, Cisco’s innovation and incubation division. AI developers are increasingly pitching the next wave of generative AI chatbots as AI “agents” that can do more helpful things on people’s behalf.
That might mean being able to ask an AI agent an ambiguous question and letting the model reason and plan out the steps to solve an ambitious problem, Pandey said. Much of the technology will move in that direction in 2025, he said.
Ultimately, Pandey predicted, AI agents will be able to come together to perform a job the way people come together to solve a problem as a team, rather than simply accomplishing tasks as individual AI tools. Future AI agents will work as an ensemble, he said.
For example, future Bitcoin software will likely rely on AI software agents, Pandey said. Each of those agents will have a specialty, he said: “Some agents check accuracy, some agents check security, and some agents check scale.”
“We are getting closer to the future of agents,” he said. “All of these agents are very good at a particular skill, but they also have a little bit of personality and color, because that’s the way we work.”
AI benefits healthcare
AI tools have streamlined, and in some cases literally helped, the medical field. This year’s Nobel Prize in Chemistry, one of two Nobel Prizes awarded for AI-related science, went to work led by Google that could help discover new drugs.
Saad, the Virginia Tech professor, said AI is helping speed up diagnosis by quickly giving doctors a starting point when making patient care decisions. AI can’t detect disease, he said, but it can quickly digest data and point out potential problem areas for a real doctor to investigate. As in other fields, however, it poses a risk of perpetuating falsehoods.
For example, tech giant OpenAI touts its AI-powered transcription tool, Whisper, as approaching “human-level robustness and accuracy.” But experts say Whisper has a major flaw: it is prone to making up chunks of text or even entire sentences.
Cisco’s Pandey said some of the company’s customers in the pharmaceutical industry have found AI helpful for bridging the gap between “wet labs,” where humans conduct physical experiments and research, and “dry labs,” where people analyze data and often use computers for modeling.
When it comes to drug development, the collaborative process can take years, but AI can shorten that process to days, he said.
“For me, this was the most dramatic use,” Pandey said.