ChatGPT for Enterprise: Can We Make It Tell the Truth?


Generative AI models like ChatGPT can produce extremely convincing prose, completing complicated tasks that would previously have required a human writer. This should be a powerful tool for businesses everywhere, except that while ChatGPT results are convincing, the models are prone to “hallucinations”: they make things up.

Being unable to rely on answers to provide ground truth has so far limited generative AI’s usefulness in the enterprise, Marshall Choy, senior VP of product at SambaNova, told EE Times.

“Enterprises have this mountain of data, structured and unstructured,” he said. “They’re trying to figure out how to get to the insights that are trapped, particularly in all the unstructured data.”

The success of ChatGPT has shown the enterprise world what’s possible, but while hallucinations don’t matter when we’re playing around with ChatGPT for fun, the models need “hardening up” before they can be used in the enterprise for business-focused use cases, Choy said.

This means we’re stuck at the augmentation stage, rather than going for full automation today.

“What we’re finding, especially in enterprise, is that this is very much augmentation with reinforcement learning and human feedback in the loop,” he said. “The reality is, the higher the cost of a mistake, the less willing we are to fully automate.”

Generative AIs like ChatGPT hold huge potential if we can be sure they are telling the truth.

Domain specific

SambaNova has trained a collection of open-source generative AI models, including GPT, Bloom and Stable Diffusion, that are optimized for domain-specific enterprise use, whether that’s in finance, legal, healthcare or the public sector. Use cases in these fields might include analysis of contact center interactions or understanding text in large volumes of documents.

“Open-source models will become the standard… all the best models in the world will come to open-source,” Anton McGonnell, senior director of product at SambaNova, told EE Times. “Our thesis is that the winners are the platforms that will be able to host the complexity, to actually be able to run these models efficiently at scale and have velocity, because the state of the art [in models] is going to change so much.”

But how do we make generative AI models like GPT tell the truth?

Ensuring the maximum accuracy for generative AI is largely dependent on specialization—a combination of training models with domain-specific data, then fine-tuning for specific tasks with a company’s own data using a front-end like SambaNova Suite, McGonnell said.
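
To make that two-stage idea concrete, here is a minimal sketch of domain fine-tuning using the open-source Hugging Face stack. It is not SambaNova Suite’s API; the base checkpoint, corpus path and hyperparameters are placeholder assumptions for illustration.

# Minimal sketch of "specialize, then fine-tune" with open-source tooling.
# NOT SambaNova Suite's API: the base checkpoint, corpus path and
# hyperparameters below are illustrative assumptions only.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base_model = "gpt2"                  # stand-in for a domain-pretrained checkpoint
corpus = "finance_filings.txt"       # hypothetical in-house, domain-specific text

tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_model)

# Tokenize the company's own unstructured text for causal-LM fine-tuning.
dataset = load_dataset("text", data_files={"train": corpus})["train"]
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="domain-gpt", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # task-specific tuning and human feedback would follow

The point of the sketch is only the division of labor: a broadly pretrained model comes first, and the enterprise’s own data does the specialization.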

“What the chatbots of the world are doing is trying to have a single model that can understand all the information in all domains, but by widening that search space, you are inevitably making the model a jack of all trades, but master of none,” he said. “Our focus is making a model that is really specialized, that’s the whole value proposition as far as we are concerned.”

Explainability

However, specialization alone isn’t always enough to meet enterprises’ needs for ground truth.

SambaNova is currently training a version of GPT that can cite its sources—so the user can easily tell whether the model is telling the truth. An earlier version of the model demonstrated to EE Times could link the model’s generated text to the documents the information was sourced from, giving the user a level of confidence that the model is in fact producing ground truth.

Marshall Choy (Source: SambaNova)
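
As a general illustration of how citation-backed answers tend to work (this is not a description of SambaNova’s implementation), retrieval-augmented generation numbers the retrieved documents and asks the model to cite them inline. A minimal Python sketch, with hypothetical document names and the final generation call left as a placeholder:

# Generic sketch of "answers with citations": retrieve sources, number them,
# and ask the model to cite them inline. Document names are hypothetical and
# the resulting prompt would be sent to whatever generation endpoint is used.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = {
    "10K-2023.txt": "Revenue grew 12% year over year, driven by services.",
    "renewal-contract.txt": "The agreement renews annually unless cancelled in writing.",
}

def retrieve(query, k=2):
    """Rank documents against the query with TF-IDF cosine similarity."""
    names, texts = zip(*documents.items())
    vec = TfidfVectorizer().fit(list(texts) + [query])
    scores = cosine_similarity(vec.transform([query]), vec.transform(texts))[0]
    ranked = sorted(zip(scores, names, texts), reverse=True)[:k]
    return [(name, text) for _, name, text in ranked]

def build_cited_prompt(query):
    """Number each retrieved source so the answer can cite [1], [2], ..."""
    sources = retrieve(query)
    context = "\n".join(f"[{i + 1}] ({name}) {text}"
                        for i, (name, text) in enumerate(sources))
    return ("Answer using ONLY the numbered sources below and cite them "
            f"inline, e.g. [1].\n\n{context}\n\nQuestion: {query}\nAnswer:")

print(build_cited_prompt("How fast did revenue grow?"))

Because every claim in the answer points back to a numbered file, a reader can check the original document rather than taking the model’s word for it.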

The need for ground truth is the difference between consumer and enterprise applications, Choy said.

“You have to have the citation to avoid copyright infringement and get a view of the true source of things; your source can’t be ChatGPT,” he said. “ChatGPT was developed to provide answers in a tone that is convincing, even when it’s not right…. Enterprises need citations, they need explainability about the source of the information. Otherwise, you could be unintentionally plagiarizing or infringing on somebody’s copyrights as well as just spreading misinformation.”

Other ideas for making ChatGPT-like models more suitable for enterprises include having the model simply answer that it doesn’t know, rather than making something up, he added.
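
One simple way to approximate that behavior, assuming retrieval scores are available (the threshold and helper callables here are hypothetical, not any vendor’s API), is to abstain whenever nothing sufficiently relevant is found:

# Minimal sketch of "say you don't know": refuse to answer when the best
# retrieval score falls below a chosen threshold instead of generating anyway.
# MIN_RELEVANCE and the helper callables are illustrative assumptions.
MIN_RELEVANCE = 0.25  # hypothetical cut-off, tuned on held-out questions

def answer_or_abstain(query, retrieve_with_scores, generate):
    hits = retrieve_with_scores(query)  # expected: [(score, document), ...]
    if not hits or max(score for score, _ in hits) < MIN_RELEVANCE:
        return "I don't know: no sufficiently relevant source was found."
    return generate(query, [doc for _, doc in hits])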

SambaNova is also addressing enterprise concerns like security and privacy: Its DataScale hardware systems can be deployed behind a customer’s firewall to be compliant with the company’s own security standards.

“We don’t use shared deployments or shared endpoints like a consumer company would,” Choy said. “We’re enabling [enterprises] to have their own dedicated backbone so there’s no IP leakage or contamination risk. And open standards—it’s about flexibility and choice—they can take our open-source model and have it their way…. We’re letting customers use their training data, but they retain ownership of the model and their data, unlike if they went to a consumer-grade, generative AI vendor in the cloud.”


