
Demo of All ChatGPT Plugins Available in April 2023

ChatGPT plugins are an exciting new development from OpenAI that will dramatically expand what ChatGPT can do. If you’re curious what it’s like to use the first batch of plugins, watch this video. Approximate Transcript: Hi, in this video I want to go deeper into the plugins. I shot another ChatGPT plugins video, but I didn’t go over all of them, so I’m going to try to cover as many as I can here. My voice may not hold up and I might pause now and then, because I do still have COVID. Also, some of the plugins I noticed are now working that weren’t working before, specifically the Zapier plugin, and I’m going to show you as many of them as I can. The last one I looked at, I couldn’t even figure out how it would work, so let’s go back and do Zapier. I’m going to uninstall everything so I can show you from scratch. It wants me to connect, so let me pull this window over. It looks like I already have a bunch of actions set up, but let’s add some new ones. Right now there are still only six actions available, and I won’t show you all of them. What happened before was that these actions didn’t work. I’m not going to do the Slack or Gmail ones; I’ll do the Google Sheets one, which I think is the more useful one. It still wasn’t working exactly right, and I’ll show you what I mean in a minute. Enable action. Okay, we got that one. Let’s also enable the lookup action, and I guess all of these. Okay, this should be working, and we’ll just close this now. Alright, now we’ve got it enabled here.

Here’s the sheet, “Yoda isms.” What it seems to do is repeat itself when it’s not supposed to; that was the last time I tried it, about a week ago. The first time I tried it, it wouldn’t do anything at all. So there’s the spreadsheet and then the worksheet inside it; I don’t remember exactly how Zapier names them. Let’s say: please add ice cream, toast, and pickles to row six, in columns A, B, and C respectively. I kind of want to test whether it will actually use row six or just use the next empty row. I’ll pause while I finish typing this. I’m going to copy this prompt, because I want to see whether I need that last part; I believe it actually does need it. Let’s try this and see what it does. “I was not able to retrieve the information,” which is what happened when I did it last time as well. Looks like I need to specify the worksheet, so you do need to specify that. Alright, stop generating, let’s try this again with the worksheet specified, even though there’s only one sheet within the Google Sheet. I’ll pause; I think it’s going to ask for confirmation and show me a preview of the action, maybe as a safety measure. So the result is weird: ice cream, ice cream, pickles. I could have edited the preview, but I just wanted to see what it did raw. And it didn’t put the data in the correct row, it just used the next empty row, which is interesting. Still, it’s better than before: in the past I also gave it three items, and it put just one of them across all three cells. It got a little closer this time. Let’s try the next thing.
Now here’s the next request: let’s see if it can retrieve this information. It says it couldn’t, due to either the specified row being empty or the specified columns not containing any data. Yeah, that’s not what we’re looking for, so it isn’t able to actually pull the information back out. It’s still not working, but again, they haven’t released this publicly yet, so I’m sure they’re still testing, and it has already improved several times as I’ve tried it.

Let’s go into some different plugins. The Wolfram Alpha one I did show successfully; that’s probably the best one I’ve found so far. Let’s try the Shop one. I can’t remember what happened with this one last time. Let me turn off Zapier first. It’s really interesting how it gives you some of the context here, though I don’t know how useful this is, to be honest. I wonder if it’s actually finding the best price, which it says is $109. What made them pick this site over Amazon? Okay, $540 for this one, all right. Maybe it’s a classic shoe, or maybe they actually did find the best one. Alright, and I guess this one is from eBay. It’s kind of cool; I like this image down here, that’s pretty neat. But I don’t see myself shopping this way. It feels like it still needs more of a comparison aspect: maybe pick a shoe, find several places where you can buy that shoe, and give you several options as opposed to just one. But it’s interesting. Oh, it’s still running; I’m not sure why, because I went and did some other stuff for a few minutes and it seems like it’s done.
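Stepping back from the demo for a second: for context on what an “add a row to Google Sheets” action boils down to behind the scenes, here’s a rough sketch using the gspread Python library. This is my own illustration of the general idea, not Zapier’s or OpenAI’s actual implementation; the credentials file and worksheet name are placeholder assumptions, and only the “Yoda isms” spreadsheet title comes from the demo above.

```python
# Toy sketch (not Zapier's or OpenAI's code) of what "add a row to a Google
# Sheet" boils down to, using the gspread library. Credentials file,
# worksheet name, and range below are placeholders for illustration.
import gspread

gc = gspread.service_account(filename="service-account.json")  # hypothetical creds file
sh = gc.open("Yoda isms")       # spreadsheet title from the demo
ws = sh.worksheet("Sheet1")     # assumed worksheet/tab name

# Write to a specific row (what I asked the plugin to do)...
ws.update(range_name="A6:C6", values=[["ice cream", "toast", "pickles"]])

# ...versus appending to the next empty row (what it actually did).
ws.append_row(["ice cream", "toast", "pickles"])

# Reading the data back, which is the step that kept failing in the demo.
print(ws.row_values(6))
```

The difference between updating a fixed range and appending to the next empty row is exactly the gap I ran into: I asked for row six, and the plugin behaved more like an append.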


The Implications of Large Language Models (LLMs) Hitting the Wall

Recently, Sam Altman said, “The Age of Giant AI Models is Over.” What he meant by that was, roughly, “our strategy of improving AI models by making them much bigger is providing diminishing returns.” So I thought it would be interesting to explore what happens if LLMs hit the wall and improvements slow dramatically. Approximate Transcript: Hi, this video is about large language models (LLMs) hitting the wall, and the implications of that. In case you haven’t heard (I shot a separate video about this), Sam Altman recently stated that the age of giant models is over, which I think is a bit misleading. Basically, what he was saying is that you can’t improve the models much more just by adding more data and more parameters. And this makes sense; it’s something some people predicted was coming, because GPT-4 already captures so much of the available data. They didn’t release the numbers for GPT-4, but for context, GPT-2 had 1.5 billion parameters, which are sort of like the number of neurons, or the number of different factors the model considers, and GPT-3 had 175 billion. We don’t know how many GPT-4 has, since they didn’t release that, but estimates are that it’s a big leap over GPT-3. There’s also the possibility that they’re essentially out of data. More data is being created every day, so it’s not that they’re out of data completely, but perhaps there isn’t enough new data to get another exponential leap. I also think he implied, and this makes sense, that more data just isn’t necessarily better; it doesn’t necessarily give you a better answer. I elaborate on that in my other recent video.

So let’s assume, for the sake of argument, that large language models, OpenAI’s included, hit a huge wall: not that they’re unable to move forward, but that their progress slows dramatically, and we don’t see anything like what people imagine GPT-5 should be for five or ten years, because maybe another technological development needs to happen first. What comes about because of this? Let’s look at the good. Probably the biggest thing is that the world gets a chance to catch up mentally, especially when it comes to misinformation being spread, identifying it, and helping people adjust to the new reality we’re finding ourselves in right now, in 2023. That’s probably the only good thing I can think of, except maybe that the pause some people were in favor of just kind of happens naturally. I personally don’t think the pause is a good idea. And there are only three dots on my list here, because I don’t really see a whole lot of good coming from this. I’m sure there are plenty of people who would be celebrating if this were the case; I would not be one of them.

The bad: good tech is slowed down. There are a lot of really good use cases coming out of these AI models that can dramatically help people’s lives. Maybe in some cases a wall doesn’t affect that, but in some cases it likely will. Just to give an example, there’s a bunch of work in healthcare, saving lives and curing diseases, where AI has already shown itself to be quite proficient and is moving forward rapidly. Perhaps that slows down; to me, that’s bad. I also think there’s an argument to be made that this could actually be better for bad actors.
And the reason for that is that I think OpenAI moving forward would actually help tamp down bad AI models. They have demonstrated to me pretty thoroughly that they have good intentions, and if there were a bad model out there, GPT-4 or GPT-5 could help identify it and fight back against it, and they would work on that. So a wall actually opens the door for bad actors; it will make sense when I get to this last bullet point. Let’s look at how good GPT-4 is right now, and I would say it’s really freakin’ good. I was testing it the other day: it’s supposed to be bad at math, but it actually did a pretty good job, showed its work, and got the answer right. It wasn’t a super complicated problem, but it was more complicated than the examples other people were saying it got wrong. (I need to add hallucinations to this list.) So there are still some things it struggles with: math, as mentioned, recent events, hallucinations. I think there are more; put them in the comments below if you have other ideas. But it doesn’t struggle with a whole lot; it does a whole lot really, really well. So I think that right now, GPT-4 as it is, is already at a point that is pretty profound. Now, Sam Altman did state that there are other ways in which they are looking to improve it, and I believe them. But maybe it’s just slower: more minor updates that come together further down the line, in terms of years, to create a bigger change, which is kind of what they said. They did say that a lot of their improvements were just a…
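To make the diminishing-returns idea concrete, here’s a tiny sketch in Python. The power-law form and the constants below are made up for illustration (loosely in the spirit of published scaling-law fits); they are not OpenAI’s numbers. The point is just that each huge jump in parameter count buys a smaller drop in a hypothetical loss.

```python
# Toy illustration of diminishing returns from scale. The constants are
# assumptions for this sketch, not measured values from any real model.
N_C, ALPHA = 8.8e13, 0.076

def toy_loss(n_params: float) -> float:
    """Hypothetical test loss as a power law in parameter count."""
    return (N_C / n_params) ** ALPHA

for label, n in [("GPT-2-sized (1.5B)", 1.5e9),
                 ("GPT-3-sized (175B)", 175e9),
                 ("hypothetical 17.5T", 1.75e13)]:
    print(f"{label:>20}: toy loss ~ {toy_loss(n):.2f}")
```

In this made-up curve, going from 1.5 billion to 175 billion parameters drops the toy loss by about 0.7, but the next 100x jump only drops it by about 0.5, despite being enormously more expensive. That, roughly, is the wall.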


“The Age of Giant AI Models is Already Over” says Sam Altman, CEO of OpenAI

This statement by Sam Altman is provocative… there seems to be an implication that giant AI models are no longer useful… but this is not what Sam means. Approximate Transcript: Hi, this video is about something that sounds really profound that Sam Altman, the OpenAI CEO, said recently: that the age of giant AI models is already over. I think this statement, taken out of context, is a bit misleading. I saw a smaller headline that I clicked on that made it seem even more salacious, as if he were saying ChatGPT is done, that it’s not good anymore. That’s not what he’s saying, even though that was kind of my first reading of it. It’s not “we’re not going to use them anymore”; they’re still going to use large language models. What he really means is that they can’t keep growing the improvement by making the models bigger. That’s the short answer, but there’s a little more context I want to add, which is that making them bigger has been the philosophy of OpenAI from the beginning.

Andrej Karpathy, who is very famous in the AI world, I believe was the head of AI at Tesla, and I think he’s now at OpenAI. I’ve watched several of his videos, and one of the things he talked about is that the code for these AI models, basically since 2017 when Google released their Transformers paper, is very short and really hasn’t changed a whole lot. It’s something like 500 lines, which for code is very, very small. And I believe it was also him who described the strategy for improving it: just keep making it bigger, add more parameters. Parameters are sort of like neurons. To give context, as they show in this article, GPT-2 had 1.5 billion parameters. (There’s a funny tagline here about being generated by artificial intelligence; I wonder if this is an AI movie, or a series about AI.) Anyway, 1.5 billion; then GPT-3 had 175 billion parameters, and it was way, way better, and that scale was a large reason for the improvement. Then with GPT-4 they didn’t announce how many parameters there are, but it’s supposed to be much bigger.

So what he’s saying is that adding more parameters, or neurons, is not going to keep improving the model; there are diminishing returns in that area, and past some point it’s not going to give you more. I think another way of looking at this is that more data doesn’t necessarily add improvements to the quality of the model either. Just in general, from a data-analysis standpoint, more data isn’t always better and doesn’t always improve things. A quick aside, if you’re wondering why you should believe me about data: data has basically been my focus for the last 20 years, from both a theoretical and a practical standpoint. I have a master’s degree in Industrial Engineering, which is actually closer to data science than it is to traditional engineering, and it involved a lot of statistics and analysis of huge, weird datasets. Then I worked at a semiconductor factory where there’s a lot of complicated data, spreadsheets with tens of thousands of rows and dozens of columns, and I worked there for about six years.
And then for the last 11 years I’ve done SEO, which is another kind of practical data analysis, very different from the semiconductor work, but still more data. So I’ve been studying data for a very long time; it’s been my jam. And it makes sense: sometimes more data doesn’t add a clearer picture of the situation. This is also something they have actually talked about, so it shouldn’t come as a surprise, even though the headline feels like a “whoa” moment. It has been discussed for a while that, number one, they’re going to run out of data to crawl. That’s not entirely accurate, because more data is being created every day, and the rate at which new data is created is itself increasing over time, but it certainly hasn’t been increasing at the rate at which they have scaled their models. And additionally, more data doesn’t necessarily help clarify the situation.

I think I’ve got a reasonable analogy. Imagine you’re trying to draw a 3D picture, and you can only do it with dots. You put in your first handful of dots, and you can see the outline of, say, a guy on a motorcycle, so you kind of know what it is. Then you put in a bunch more dots and you get a lot more clarity: you can see his facial expression, and you can see that he’s got a bandage on his leg or whatever. Then you put in more dots and you get a very clear picture. Now, when you add even more dots to the picture, to the dataset, there’s no additional clarity, or the clarity that’s added is very minor. And I think this metaphor works for how they’re dealing with the data and the parameters of GPT-4 and beyond. Because, you know, it…
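Since the video leans on the point that the core transformer code is tiny, here’s a rough sketch of the piece at its heart: single-head scaled dot-product self-attention in plain NumPy. This is my own toy illustration under simplified assumptions, not OpenAI’s or Karpathy’s code, and it skips multi-head projections, masking, the feed-forward layers, and training entirely.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention.
    x: (seq_len, d_model) token embeddings; Wq/Wk/Wv: (d_model, d_head)."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(k.shape[-1])  # how much each token attends to each other token
    weights = softmax(scores, axis=-1)       # each row sums to 1
    return weights @ v                       # weighted mix of value vectors, (seq_len, d_head)

# Tiny smoke test with random weights (a real model learns Wq, Wk, Wv).
rng = np.random.default_rng(0)
seq_len, d_model, d_head = 4, 8, 8
x = rng.normal(size=(seq_len, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_head)) for _ in range(3))
print(self_attention(x, Wq, Wk, Wv).shape)   # -> (4, 8)
```

Stacking blocks like this, with more heads, wider matrices, and more layers, is essentially what “making the model bigger” means, which is why the parameter count, rather than the code, is where all the growth has happened.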


OpenAI Not Working on GPT-5?

Sam Altman, CEO of OpenAI, made some interesting comments recently about GPT-5. They are being interpreted heavily, and it seems to me that some people are reading a bit too much into them… so I decided to do my own reading into the comments, lol. Approximate Transcription: Hi, this video is about what Sam Altman said about GPT-5, some of the reactions to it, and some interpretations of what it really means for GPT-5 going forward. There’s a video, and I’ll put a link to it in this tweet, along with an article, so you can read it all if you want. Watch this quick clip where Sam Altman calls into an event hosted by Lex Fridman, and he says: we are not currently training GPT-5; we’re working on doing more things with GPT-4. I watched another video where someone said this meant they’re not working on GPT-5. That’s not the same thing to me, because you can work on the algorithm or the model without training it. Although, the code for it is supposed to be pretty simple, so maybe there’s not a lot of work to be done there. Or maybe they are kind of working on it by working on GPT-4: the things they solve with GPT-4 are solutions they can take and apply to GPT-5. I don’t think this pushes the timeline out, and I don’t think it should be interpreted as them trying to put a pause on things to heed the call of those people from about a month ago. Actually, Sam Altman comments on that in this video; he says something to the effect of, hey, they have some valid points, but there are other things he thinks are technically not very accurate. I still think we’re on track for something like a GPT-4.5 (excuse me, COVID again) maybe late this year or early next year, and then GPT-5 maybe about two years out. That’s based purely on the history so far; obviously it’s a wild guess. Some people also seem to be interpreting this as a lie. I don’t think it’s a lie. I have found Sam Altman to be extremely straightforward in every single thing of his I’ve watched, in terms of telling it like he sees it and calling it like he sees it. He can be somewhat political about it, but he’s not afraid to say he disagrees with someone, or to say this is what we want to do. So I don’t think this is him trying to claim they’re not really working on it when they are. The fact that they’re not training it isn’t too surprising; it isn’t big news, and I don’t really doubt him. I’m genuinely curious what you think, though. You can see from this poll right here on Twitter that a lot of people doubt him. Do you doubt him? Do you think it’s true? What do you think this means? I’m very curious what you have to say. Anyway, this is a quick update. If you liked this video, like and subscribe for more awesome AI videos. Thanks. Have a great day. Bye.

