Some observers are predicting a colossal number of parameters and an imminent release for GPT-4, but the OpenAI team remains patient.
At the moment, for anyone interested in new technologies, it is almost impossible to go a whole day without hearing about ChatGPT. OpenAI’s remarkable machine-learning-powered chatbot never ceases to impress. An algorithm capable of weighing in on a quintessentially French debate, of helping students cheat, or even of contributing to the fight against Alzheimer’s disease already shows enormous potential, and this is only the beginning.
As the name suggests, ChatGPT is based on GPT, a text-generation model that had already become a reference in the field for some time, long before the chatbot arrived. The model is currently at version 3, or more precisely GPT-3.5 since last December.
It is partly thanks to the improvements in this latest update that the chatbot performs so impressively. One only has to glance at the millions of examples already circulating on the Web to see that version 3.5 marked a major step forward in text generation. The most striking part is that this is only a taste of what awaits us, because GPT will shift up another gear with the release of GPT-4.
A new model with even greater potential
On this occasion, some observers expect an explosion in the number of parameters. In machine learning, this term designates internal variables that are not explicitly set by the developers, but whose values the model must estimate from the data it ingests during training. On paper, the higher the number of parameters, the more faithfully the artificial neural network can represent the data used to train it.
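To make the notion concrete, here is a minimal sketch in Python with PyTorch (purely illustrative, and in no way OpenAI’s code) that counts the learnable parameters of a tiny neural network:

```python
# Minimal illustration: "parameters" are the learnable weights and biases
# that a model adjusts during training. Here we count them for a tiny
# feed-forward network.
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(128, 256),  # weight matrix (128 x 256) + bias vector (256)
    nn.ReLU(),
    nn.Linear(256, 10),   # weight matrix (256 x 10) + bias vector (10)
)

total = sum(p.numel() for p in model.parameters())
print(f"{total:,} learnable parameters")  # 35,594 for this toy model
```

GPT-3’s 175 billion parameters are simply this same kind of count, repeated across hundreds of far larger layers.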
GPT-4 is going to launch soon. And it will make ChatGPT look like a toy…
→ GPT-3 has 175 billion parameters
→ GPT-4 has 100 trillion parameters
I think we’re gonna see something absolutely mindblowing this time! And the best part? 👇 pic.twitter.com/FAB5gFjveb
— Simon Hoiberg (@SimonHoiberg) January 11, 2023
Put very crudely, increasing the number of parameters more or less amounts to increasing the “power” of the algorithm, i.e. its ability to produce coherent results in increasingly complex cases. That has held true so far, and on this criterion in particular GPT has already made giant leaps, going from roughly 1.5 billion parameters with the early GPT-2 to 175 billion with the impressive GPT-3.5.
Some believe that another explosion in the number of parameters is to be expected in the next version. In an interview with Wired, the CEO of an OpenAI partner suggested that GPT-4 could reach 100 trillion parameters. In other words, a ChatGPT equipped with this new version should offer even finer, more nuanced, and more precise answers than before.
The central question of the number of parameters
But as it stands, these announcements should be treated with caution. Sam Altman, CEO and co-founder of OpenAI, has said otherwise. According to The Decoder, during a question-and-answer session reported on LessWrong, he insisted that such a figure was at best a long-term goal, and absolutely not a priority for GPT-4.
Moreover, the firm does not necessarily have an interest in focusing on the number of parameters. It is much the same trade-off found in the hardware world: increasing a processor’s clock speed or core count raises computing power, but it is also possible to improve the architecture to make progress on that front. In other words, it’s not just size that matters!
To stay with the CPU example: in most cases, increasing the frequency and the number of cores also drives up the amount of energy required, and, by extension, the thermal constraints and the cost of the operation. The same observation applies to neural networks, even if it plays out differently. Increasing the number of parameters in a model risks a huge increase in the computing power needed to run it, with everything that implies at the operational level.
And there are other potential pitfalls, more specific to machine learning. One example is overfitting. Very briefly, this term designates a situation where a network with a great many parameters has fit the data used to train it too closely. The model then produces consistent results on that material in particular, but struggles to extrapolate its conclusions to new data. Rather annoying, given that generalizing is precisely the point of training.
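To see what this looks like in practice, here is a toy sketch in plain NumPy (unrelated to GPT itself) where a polynomial with as many coefficients as training points fits those points almost perfectly but generalizes far worse than a simpler one:

```python
# Toy illustration of overfitting: a model with too many free parameters
# memorizes its training points but performs poorly on new data.
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 10)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.2, 10)  # noisy samples
x_test = np.linspace(0, 1, 100)
y_test = np.sin(2 * np.pi * x_test)                              # unseen data

for degree in (3, 9):  # degree 9: as many coefficients as training points
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train error {train_err:.3f}, test error {test_err:.3f}")
```

The high-degree fit drives the training error toward zero while its error on the unseen points balloons, which is exactly the behavior a model trained for real use needs to avoid.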
In short, it is better to take these famous 100 trillion parameters with a grain of salt. To see things more clearly, we will have to wait for the wizards at OpenAI to publish a full technical sheet and an official release date.
Huge stakes and welcome precautions
That release date remains just as mysterious as the number of parameters. There is a glaring gap between the enthusiasm of some observers and Sam Altman’s relative reticence. The former regularly suggest that the release of GPT-4 is not far off; the CEO, on the other hand, systematically refuses to announce any specific deadline. He has, moreover, repeated several times that he has no qualms about making his audience wait a very long time in order to deliver a polished product (see this article from The Decoder).

Certainly, there is reason to be impatient to discover this new version. But the methodical caution displayed by Altman is a good thing. Everyone has seen the phenomenal impact ChatGPT has had in just a few weeks. We can therefore consider ourselves lucky that OpenAI is taking its time to put safeguards in place before releasing an even more powerful version. If things were rushed, the impact on society could be considerable, whether in terms of disinformation, intellectual property, education, or the academic world.
With ChatGPT, OpenAI is well aware of having opened a first-class Pandora’s box. We can therefore take some comfort in the fact that Sam Altman has his head screwed on. He remains a firm proponent of the precautionary principle, which is rare enough to be worth highlighting in companies funded by Elon Musk. Let us hope he holds this course against the controversial billionaire’s appetite for all-out innovation. That may allow OpenAI to carry off this great generational transition without stumbling.