This statement by Sam Altman is provocative… …there seems to be an implication that giant AI models are no longer useful… …but this is not what Sam means.

Approximate Transcript:

Hi, this video is about something that sounds really profound that Sam Altman, the OpenAI CEO, said recently: that the age of giant AI models is already over. I think this statement, taken out of context, is a bit misleading. The smaller headline I clicked on made it seem even more salacious, as if he were saying that ChatGPT is done, that it's not good anymore. That's not what he's saying, though that was kind of my first reading of it: oh, we're not going to use them anymore. No, they're going to keep using large language models. What he really means is that they can't keep growing the improvement of these models just by making them bigger. That's the short answer.

There's a little more context I want to add, which is that scaling has been the philosophy of OpenAI from the beginning. Andrej Karpathy, who is very famous in the AI world (I believe he was the head of AI at Tesla, and I think he's now at OpenAI), has made several videos I've watched, and one of the things he talked about was that the code for these AI models has been very short, basically since 2017, when Google released their Transformers paper, and it really hasn't changed a whole lot. I think it's around 500 lines, which for code is very, very small. And I believe it was him who said the strategy, the way to improve these models, is just to make them bigger: keep adding more parameters. Parameters are sort of like neurons. To give context, they show the numbers in this article.
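As a rough illustration of how compact that core really is, here is a minimal sketch of single-head scaled dot-product self-attention, the central operation from the 2017 Transformers paper, in NumPy. This is a toy for illustration only, not OpenAI's or Karpathy's actual code; the sequence length, embedding size, and random weights are all assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # Single-head scaled dot-product self-attention: the core of the
    # Transformer architecture, expressible in a handful of lines.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # (seq_len, seq_len) similarities
    weights = softmax(scores, axis=-1)       # each row sums to 1
    return weights @ V                       # weighted mix of value vectors

# Toy example: 4 tokens with 8-dimensional embeddings (sizes are arbitrary).
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (4, 8)
```

In a real model the parameters live in the projection matrices (and the feed-forward layers around them), so "making it bigger" mostly means making these matrices larger and stacking more of them; the code itself barely changes.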
GPT-2 had 1.5 billion parameters. (There's a funny tagline in the article, something about being generated by artificial intelligence; I wonder if it's for an AI movie or a series about AI. Anyway.) So, 1.5 billion, and then GPT-3 had 175 billion parameters, and it was way, way better; the added scale was a large reason for the improvement. For GPT-4 they didn't announce how many parameters it has, but it's supposed to be much bigger. So what Sam is saying is that adding more parameters, or neurons, isn't going to keep improving the model; there are diminishing returns, and past some point more size isn't going to give you more.

I think another way of looking at this is data: more data doesn't necessarily add improvements to the quality of the model either. Just in general, from a data-analysis standpoint, more data isn't always better and doesn't always improve things.

A quick aside, if you're thinking "why should I believe you about data": I've worked with data from both a theoretical and a practical standpoint for roughly the last 20 years. I have a master's degree in Industrial Engineering, which is actually closer to data science than it is to engineering, and it involved a lot of statistics and analysis of huge, weird datasets. Then I worked for about six years at a semiconductor factory, where there's a lot of complicated data, spreadsheets with tens of thousands of rows and dozens of columns. And for the last 11 years I've done SEO, which is another kind of practical data analysis, very different from semiconductors, but still data. Data has been my jam for a very long time. And it makes sense: sometimes more data doesn't add a clearer picture of the situation.
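The diminishing-returns point can be sketched numerically. Researchers have described language-model loss as falling off roughly as a power law in parameter count; the snippet below is purely illustrative (the exponent and the functional form are assumptions borrowed from that general idea, not OpenAI's actual numbers), but it shows why each jump in size buys less than the one before.

```python
# Purely illustrative "scaling law": loss falling as a power of parameter
# count. The exponent 0.076 is an assumption for demonstration, not an
# OpenAI figure.
ALPHA = 0.076

def loss(n_params):
    return n_params ** (-ALPHA)

# A GPT-2-sized model, a GPT-3-sized model, and a hypothetical 100x GPT-3.
sizes = [1.5e9, 175e9, 17.5e12]
for small, big in zip(sizes, sizes[1:]):
    drop = loss(small) - loss(big)
    print(f"{small:.3g} -> {big:.3g} params: loss drops by {drop:.4f}")
```

Under this toy curve, the 100x jump from GPT-2 scale to GPT-3 scale reduces the loss more than a further 100x jump beyond GPT-3 would: diminishing returns, which is exactly what Sam is pointing at.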
And they have talked about this; it actually shouldn't come as a surprise, even though the headline reads like a shock. It has been discussed for a while that, number one, they're going to run out of data to crawl. That's not entirely accurate, because more data is being created every day, and the rate at which new data is produced is itself increasing over time. But it certainly hasn't been increasing at the rate at which they have grown their models. And additionally, more data doesn't necessarily help clarify the situation.

I think I've got a reasonable analogy. Imagine you're trying to draw a 3D picture, and you can only do it with dots. You put in a handful of dots, and you can see the outline of, say, a guy on a motorcycle, so you kind of know what it is. Then you put in a bunch more dots and you get a lot more clarity: you can see his facial expression, and you can see that he's got a bandage on his leg or whatever. Then you put in more dots, and you get a very clear picture. But now, when you add even more dots to the picture, to the dataset, there's no additional clarity, or the clarity that is added is very minor. I think this metaphor works for how they're dealing with the data and the parameters of GPT-4 and beyond. Because, you know, it
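The dots analogy can be put in simple statistical terms: the precision of a sample-based estimate typically improves like 1/sqrt(n), so each additional batch of data sharpens the picture less than the last. A toy sketch (the numbers are illustrative, not from the video):

```python
import math

def standard_error(sigma, n):
    # Standard error of a sample mean: sigma / sqrt(n). Each extra batch of
    # "dots" sharpens the picture less than the previous batch did.
    return sigma / math.sqrt(n)

sigma = 1.0  # assumed spread of the underlying data (illustrative)
for n in [100, 1_000, 10_000, 100_000]:
    print(n, round(standard_error(sigma, n), 5))
```

Going from 100 to 1,000 points cuts the error by about 0.068; going from 10,000 to 100,000, a far larger amount of raw data, cuts it by only about 0.007.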