The stars of the moment, in the spotlight of this new technological stage (or perhaps it would be better to say battlefield), are OpenAI with ChatGPT (powered by GPT-4), Google with Bard, and Microsoft with Bing AI and Copilot.
As you know, these are LLMs (Large Language Models): language models built on neural networks with an enormous number of parameters (we are talking about billions or more), trained on large amounts of text through self-supervised learning.
These AI tools are essentially autocomplete systems, trained to predict which word comes next in a given sentence. They have no database of reference facts to draw on; they have only developed the ability to write statements that are plausible to read. This means they can present false information as if it were true, simply because it sounds plausible.
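To make the "autocomplete" idea concrete, here is a deliberately tiny sketch (not a real LLM, which uses a neural network over billions of parameters): a bigram model that learns, from raw text alone, which word tends to follow which. The corpus and function names are invented for illustration; the self-supervised aspect is that the "labels" (the next words) come from the text itself.

```python
from collections import Counter, defaultdict

# Toy "training corpus" (invented for illustration).
corpus = (
    "the cat sat on the mat "
    "the dog sat on the rug "
    "the cat chased the dog"
).split()

# Self-supervised "training": the next word in the text is the label,
# so we just count which word follows which.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the continuation seen most often after `word` in training."""
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("sat"))  # → "on"
```

Note that the model has no notion of truth: it only knows what continuations were frequent in its training text, which is exactly why plausible-sounding but false output is possible.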
Older attempts at developing AI assistants, such as Apple’s Siri and Amazon’s Alexa, haven’t achieved much, despite having had over a decade to develop and evolve (which is why they’re now mostly used to set alarms, play music, and input calendar events).
ChatGPT and Bard, on the other hand, recognize and generate text based on huge data sets drawn directly from the web. They are trained to produce sentences as a human would, and this is where their strength lies: it makes them far more versatile and well suited to serving as effective assistants (and more).
At this point, however, one wonders whether this new method of AI-assisted search could unbalance the web ecosystem. When an AI selects and extracts information from the web without users ever clicking through to the source, it undermines the search paradigm that keeps many sites alive.
What do you think about it?