Researchers have developed an Artificial Intelligence (AI) chatbot called ChatGPT, which can answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests.
Commentators say this new chatbot from OpenAI could bring about everything from the “death of the school essay” to the dawn of a new age of communication. It quickly summarizes an answer into a grammatically correct, properly punctuated paragraph. It also writes code. Within two weeks of its Nov. 30 launch, millions of users were trying out the large-language-model artificial intelligence app. In fact, it was getting so much attention that the system periodically exceeded its user capacity.
When asked how it works, it replied, “As a large language model trained by OpenAI, I generate responses to text-based queries based on the vast amount of text data that I have been trained on. I do not have the ability to access external sources of information or interact with the internet, so all of the information that I provide is derived from the text data that I have been trained on.”
This might sound amazing, but the system does not “think” and is incapable of coming up with original ideas. It works by closely mimicking human language, giving it the potential to make writing tasks quicker and easier in a way never seen before. What makes ChatGPT so captivating is its seamless use of human language. Professor Tinglong Dai, an AI expert at Johns Hopkins Carey Business School, said, “It’s also very deceptive in the sense that it is incapable of telling whether what it writes is accurate. In fact, just based on my own extensive testing, I found that it makes tons of factual mistakes, but it does so in a confident, authoritative kind of a way.”
Data scientists noticed that ChatGPT’s answers seem to become increasingly repetitive and even “defensive” when it is asked the same question over and over again. ChatGPT’s answers also vary depending on the language used to ask the question, because its answers reflect the language of the source material ChatGPT draws from to formulate its response.
“This tool could pose a severe challenge to democracy, because it means that the cost of creating misinformation would become insanely low, such that it’s going to be nearly impossible for people to detect AI-created content,” Dai said. “Say that you can even make AI more authentic by inserting typos and other errors and biases that make it seem even more authentic and personable. I think that’s the scariest part.”