How to get better results from ChatGPT, Bing, and Google Bard?

Artificial Intelligence (AI)-powered chatbots such as ChatGPT, Bing, and Google Bard are having a moment: they promise to be the next generation of conversational software tools, doing everything from taking over our web searches to producing an endless supply of creative writing to remembering all the world's knowledge so that people no longer have to memorize everything they need for their work. ChatGPT, Google Bard, and a few other chatbots like them are all built on large language models, or LLMs.

Like a lot of AI systems, including those designed to recognize your voice or generate pictures of animals, LLMs are trained on huge volumes of data. The companies behind these chatbots have been rather circumspect when it comes to revealing where exactly that data comes from.

The research paper introducing LaMDA (Language Model for Dialogue Applications), which Google Bard is built on, is hosted on Cornell University's arXiv; it lists training sources such as Wikipedia, "public forums," and "code documents from sites related to programming like Q&A sites, tutorials," etcetera.

Meanwhile, Reddit wants to start charging for access to its 18 years of text conversations, and StackOverflow has announced plans to start charging as well. The implication is that LLMs have been making extensive use of both sites as sources up until this point, entirely for free and on the backs of the people who built and used those resources. It's clear that a lot of what's publicly available on the web has been scraped and analyzed by LLMs.

All of this text data, wherever it comes from, is processed through a neural network, a commonly used type of AI engine made up of multiple nodes and layers. These networks continually adjust the way they interpret and make sense of data based on a host of factors, including the results of previous trial and error. Most LLMs use a specific neural network architecture called a transformer, which has some tricks particularly suited to language processing (the GPT in ChatGPT stands for Generative Pre-trained Transformer).
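To make that "trial and error" idea concrete, here is a minimal Python sketch, with made-up numbers and nothing like the scale of a real network: a single adjustable weight gets nudged after every wrong guess until its predictions stop being wrong.

```python
# A toy version of "adjusting based on previous trial and error":
# one weight, nudged after each guess so the error shrinks.
# All numbers are invented for illustration.
weight = 0.0            # the network's only adjustable parameter
learning_rate = 0.1

# toy data: the "right answer" is always 2 * x
examples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

for epoch in range(50):
    for x, target in examples:
        prediction = weight * x              # the network's guess
        error = prediction - target          # how wrong it was
        weight -= learning_rate * error * x  # adjust using the result

print(f"learned weight: {weight:.3f}")       # converges toward 2.0
```

Real LLMs make the same kind of adjustment across billions of weights at once, but the principle is identical.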

Specifically, a transformer can read vast amounts of text, spot patterns in how words and phrases relate to each other, and then make predictions about what words should come next. You may have heard LLMs being compared to supercharged autocorrect engines, and that’s actually not too far off the mark: ChatGPT and Bard don’t really “know” anything, but they are very good at figuring out which word follows another, which starts to look like real thought and creativity when it gets to an advanced enough stage.
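To see the autocorrect comparison in action, here is a toy Python bigram model, assuming nothing beyond the standard library: it counts which word tends to follow which in a scrap of invented text, then "predicts" the next word. Real LLMs learn vastly richer patterns, but the prediction step is the same in spirit.

```python
# A toy "supercharged autocorrect": count word -> next-word pairs,
# then predict the most frequent follower. Corpus is invented.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

following = defaultdict(Counter)          # word -> counts of next words
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word`."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))   # -> 'cat' (ties broken by first occurrence)
```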

One of the key innovations of these transformers is the self-attention mechanism. It's difficult to explain in a paragraph, but in essence it means words in a sentence aren't considered in isolation but in relation to one another, in a variety of sophisticated ways. That allows for a greater level of comprehension than would otherwise be possible.
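For the curious, here is a rough numpy sketch of the core self-attention calculation; the shapes and random values are invented for illustration, and real models add many refinements (multiple heads, masking, learned embeddings) on top.

```python
# Minimal self-attention: every word's vector is scored against every
# other word's, and those scores decide how the words get blended.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    Q, K, V = X @ Wq, X @ Wk, X @ Wv          # queries, keys, values
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # every word vs. every word
    weights = softmax(scores, axis=-1)        # attention weights per word
    return weights @ V                        # blend the values accordingly

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                   # 4 "words", 8-dim vectors
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)    # (4, 8)
```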

There is some randomness and variation built into the code, which is why you won’t get the same response from a transformer chatbot every time. This autocorrect idea also explains how errors can creep in. On a fundamental level, ChatGPT and Google Bard don’t know what’s accurate and what isn’t. They’re looking for responses that seem plausible and natural, and that match up with the data they’ve been trained on.
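A short Python sketch shows where that randomness comes from, with invented words and scores: the model's raw preferences are turned into probabilities, and the next word is sampled rather than always taking the top pick. A "temperature" setting controls how adventurous the sampling is.

```python
# Sampling the next word instead of always picking the favorite.
# Words and scores are invented for illustration.
import numpy as np

rng = np.random.default_rng()
words = ["cat", "dog", "mat", "hat"]
logits = np.array([2.0, 1.5, 0.5, 0.1])   # the model's raw scores

def sample_word(temperature):
    probs = np.exp(logits / temperature)
    probs /= probs.sum()                  # softmax -> probabilities
    return words[rng.choice(len(words), p=probs)]

# low temperature: nearly always the top word; high: more variety
print([sample_word(0.2) for _ in range(5)])
print([sample_word(1.5) for _ in range(5)])
```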

So, for example, a bot might not always choose the most likely word that comes next, but the second- or third-most likely. Push this too far, though, and the sentences stop making sense, which is why LLMs are in a constant state of self-analysis and self-correction. Part of a response is of course down to the input, which is why you can ask these chatbots to simplify their responses or make them more complex.
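One common way of "not always choosing the most likely word" is top-k sampling, sketched below with an invented vocabulary: only the k best candidates stay in the running, and one of them is drawn at random, so the second- or third-most likely word gets a real chance without the long tail of nonsense ever being picked.

```python
# Top-k sampling: restrict the draw to the k most likely words.
# Vocabulary and scores are invented for illustration.
import numpy as np

rng = np.random.default_rng()
words  = np.array(["cat", "dog", "mat", "hat", "bat"])
logits = np.array([2.0, 1.8, 1.5, 0.3, 0.1])

def top_k_sample(k):
    top = np.argsort(logits)[-k:]        # indices of the k best words
    probs = np.exp(logits[top])
    probs /= probs.sum()                 # renormalize over the survivors
    return words[rng.choice(top, p=probs)]

print(top_k_sample(3))  # 'cat', 'dog', or 'mat' -- never the long tail
```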
