Snowflake CEO explains why ‘the insidious thing’ about AI hallucinations isn’t the occasional error



  • Snowflake CEO Sridhar Ramaswamy said tech firms lack transparency about their AI hallucination rates.
  • The issue isn’t occasional errors — it’s not knowing which part of the AI’s answer is off, he said.

AI may seem to have all the answers, yet it is prone to giving completely fictitious ones. Snowflake CEO Sridhar Ramaswamy said AI companies would benefit from making it clear just how often that happens.

In a recent episode of “The Logan Bartlett Show,” the former Google executive said that tech firms should be more transparent about AI hallucination rates.

“If you look, no one publishes hallucination rates on their models or on their solutions,” Ramaswamy said. “It’s like, ‘Look, we’re so cool, you should just use us.'”

Modern LLMs can hallucinate at a rate of anywhere from 1% to almost 30%, according to third-party estimates.

“I don’t think the kind of ‘AI industry,’ if there is such a term, does itself any favors by simply not talking about things like hallucination rates,” Ramaswamy said.

Some tech moguls have defended AI hallucinations, including OpenAI CEO Sam Altman, who said that AI models that only answered when absolutely certain would lose their “magic.”

“If you just do the naive thing and say ‘never say anything that you’re not 100% sure about’, you can get them all to do that,” Altman said during an interview in 2023. “But it won’t have the magic that people like so much.”

Anthropic cofounder Jared Kaplan said earlier this year that the end goal is AI models that don’t hallucinate, but occasional chatbot errors are a necessary “tradeoff” for users.

“These systems — if you train them to never hallucinate — they will become very, very worried about making mistakes and they will say, ‘I don’t know the context’ to everything,” he said. It’s up to developers to determine when occasional errors are acceptable in an AI product.

AI hallucinations have gotten some tech giants into sticky situations, such as last year, when a radio host sued OpenAI after ChatGPT generated a fabricated legal complaint falsely accusing him of wrongdoing.

Ramaswamy said that, especially for “critical applications” like analyzing a company’s financial data, AI tools “can’t make mistakes.”

“The insidious thing about hallucinations is not that the model is getting 5% of the answers wrong, it’s that you don’t know which 5% is wrong, and that’s like a trust issue,” the Snowflake CEO said.

Baris Gultekin, Snowflake’s head of AI, told Business Insider in a statement that AI hallucinations are the “biggest blocker” to deploying generative AI for front-end users.

“Right now, a lot of generative AI is being deployed for internal use cases only, because it’s still challenging for organizations to control exactly what the model is going to say and to ensure that the results are accurate,” he said.

However, Gultekin said AI accuracy will improve, with companies now able to put guardrails on the models’ output to restrict what AI can say, what tones are allowed, and other factors.

“Models increasingly understand these guardrails, and they can be tuned to protect against things like bias,” he said.

With more access to diverse data and sources, Gultekin said that AI will increasingly become more accurate and much of the “backlash” will be “mitigated one successful use case at a time.”

Snowflake’s CEO said that while he would want 100% accuracy for certain AIs like financial chatbots, there are other cases where users would “happily accept a certain amount of mistakes.”

For example, Ramaswamy said that people send him a lot of articles to read. When he inputs them into a chatbot like Claude or ChatGPT, he doesn’t mind if the summary isn’t always exactly right.

“It’s not a huge deal because it’s just the time saving that I get from not actually having to crank through the article — there’s so much value,” he said.
