God, that was a bad read. Not only is this person woefully misinformed, they’re complaining about the state of discourse while directly contributing to the problem.
If you’re going to write about tech, at least take some time to have a passable understanding of it, not just “I use the product for shits and giggles occasionally.”
this person woefully misinformed
In what way, about what? Can you elaborate?
directly contributing to the problem
How so?
have a [passable] understanding of it
Why do you insinuate that they do not?
I’ll preface this by saying I’m not an expert, and I don’t like to speak authoritatively on things that I’m not an expert in, so it’s possible I’m mistaken. Also I’ve had a drink or two, so that’s not helping, but here we go anyways.
In the article, the author quips about a post, and in doing so seems to fundamentally misunderstand how LLMs work:
I tabbed over to another tab, and the top post on my Bluesky feed was something along these lines:
ChatGPT is not a search engine. It does not scan the web for information. You cannot use it as a search engine. LLMs only generate statistically likely sentences.
The thing is… ChatGPT was over there, in the other tab, searching the web. And the answer I got was pretty good.
The tweet is correct. The LLM has a snapshot understanding of the internet based on its training data. It’s not what we would generally consider a true index-based search.
Training LLMs is a costly and time-consuming process, so it’s fundamentally impossible to regenerate an LLM within the same order of magnitude of time it takes to build or update a simple index.
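To make the contrast concrete, here’s a toy inverted index in Python (purely illustrative, nothing like a production search engine): a new page becomes searchable the moment it’s added, whereas a model’s weights stay frozen until the next training run.

```python
# Toy sketch only: shows why an index can absorb new pages instantly,
# while an LLM's "knowledge" is fixed until it is retrained.
from collections import defaultdict

index = defaultdict(set)  # term -> set of page ids

def add_page(page_id, text):
    # Adding a freshly crawled page is cheap: update a few postings lists.
    for term in text.lower().split():
        index[term].add(page_id)

def search(query):
    # Return pages containing every query term.
    terms = query.lower().split()
    if not terms:
        return set()
    results = set(index[terms[0]])
    for term in terms[1:]:
        results &= index[term]
    return results

add_page("page-1", "LLMs only generate statistically likely sentences")
add_page("page-2", "search engines crawl and index the web continuously")
print(search("index the web"))  # {'page-2'} -- usable the instant the page is added
```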
The author fails to address any of these issues, which suggests to me that they don’t know what they’re talking about.
I suppose I could concede that an LLM can fill a role similar to the one a search engine traditionally has, but it’d kinda be like saying that a toaster is an oven. They’re both confined boxes which heat food, but good luck trying to bake 2 pies at once in a toaster.
ChatGPT searches the web.
You can temporarily add context on top of the training data; that’s how you can import a document, have the model read through it, and output, say, an Excel spreadsheet based on a PDF’s contents.
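Roughly like this, if it helps (just a sketch assuming the OpenAI Python SDK; the model name, prompts, and `pdf_text` are placeholders, and the ChatGPT UI does the equivalent behind the scenes):

```python
# Sketch only: the document's text rides along in the prompt for this one
# request; the model's weights never change.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

pdf_text = "...text already extracted from the PDF by whatever tool you like..."

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system", "content": "Extract any tables in the document as CSV."},
        {"role": "user", "content": pdf_text},  # temporary context, not training data
    ],
)
print(response.choices[0].message.content)
```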
Appreciate the correction. Happen to know of any whitepapers or articles I could read on it?
Here’s the thing, I went out of my way to say I don’t know shit from bananas in this context, and I could very well be wrong. But the article certainly doesn’t sufficiently demonstrate why it’s right.
Most technical articles I click on go through a step-by-step process to show how they gained their understanding of the subject material, and it’s laid out in a manner that less technical people can still follow. The payoff is that you come out feeling like you understand a little bit more than you did going in.
This article is just full-on “trust me bro.” I went in with a mediocre understanding and came out about the same, but with a nasty taste in my mouth. Nothing of value was learned.
He didn’t write that to teach but to vent. The intended audience is people who already know.
For more information on ChatGPT’s current capabilities, consult the API docs. I’ve found those to be the most concise source of reliable information. And under no circumstances believe anything about AI that you read on Lemmy.
Kudos for being willing to learn.
But it doesn’t do that for an entire index. It can just skim a few extra pages you’re currently chatting about. It will, for example, have trouble with the latest news, or with finding the new domain of someone’s favorite piracy site after the old one got shut down.
I think ChatGPT does web searches now, maybe for the reasoning models. At least it looks like it’s doing that.
Most do
One doesn’t need to know how an engine works to know the Ford Pinto was a disaster.
One doesn’t need to know how LLMs work to know they are pretty destructive and terrible.
Note: I’m not going to argue this. It’s just how things are now, and no apologetics will change what it is.
Not only is Steve right that ChatGPT writes better than the average person (which is indeed an elitist asshole take), ChatGPT also has better logical reasoning than the average Lemmy commenter.
https://chatgpt.com/share/6837f656-b1b0-8008-ac31-d8858bd17da5
I apologize that Lemmy/Reddit people apparently don’t have enough self-awareness to accept good criticism, especially when it was just automatically generated, and have downvoted it to oblivion. Though I don’t really think you should respond to comments with a ChatGPT link; it’s not exactly helpful. Comes off a tad bit AI Bro…
I 100% agree with the first point, but I’d make a slight correction to the second: it’s debatable whether an LLM can truly use what we call “logic,” but it’s undeniable that its output is far more logical than that of not only the average Lemmy user, but the vast majority of social media users in general.
What world are you living in?
Reality, where observation precedes perception
I think we should swap usernames.
Your username is ‘anus’, and you’re posting nonsensical blog slop on Lemmy.
That’s an interesting reality that I’m not sure many others participate in.
The irony of focusing on my username when logical coherence is in question
Well, hey, if I changed my name to “Dumbfuck Mc’Dipshitterson” I think I’d have a PR problem as well.
Oh no, not my public image!
An observation is perception(?).
Try asking ChatGPT if you’re confused
I’m making that statement. Sorry if it was unclear.
Dude. Go outside
This is an argument of semantics more than anything. Like asking if Linux has a GUI. Are they talking about the kernel or a distro? Are some people going to be really pedantic about it? Definitely.
An LLM is a fixed blob of binary data that can take inputs, do some statistical transformations, and produce an output. ChatGPT is an entire service, or ecosystem, built around LLMs. Can it search the web? Well, sure, they’ve built a solution around the model to allow it to do that. However, if I were to run an LLM locally on my own PC, it wouldn’t necessarily have the tooling built around it to allow for something like that.
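Something like this toy loop, to show what I mean (stubbed functions, not ChatGPT’s actual plumbing; the point is that the web request happens in the service wrapped around the model, not inside the model itself):

```python
def fake_llm(prompt: str) -> str:
    # Stand-in for the fixed model: text in, text out, no network access.
    if "search results" in prompt.lower():
        return "Answer written from the provided search results."
    return "TOOL_CALL: web_search('latest foo release date')"

def fake_web_search(query: str) -> str:
    # Stand-in for the tooling the service bolts on around the model.
    return f"Search results for {query!r}: ..."

def chat_service(user_message: str) -> str:
    reply = fake_llm(user_message)
    if reply.startswith("TOOL_CALL:"):
        query = reply.split("web_search(")[1].strip("')")
        results = fake_web_search(query)  # the service, not the model, hits the web
        reply = fake_llm(f"{user_message}\n\nSearch results:\n{results}")
    return reply

print(chat_service("When did foo 2.0 come out?"))
```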
Now, can we expect every person to be fully up to date on the product offerings at ChatGPT? Of course not. It’s not unreasonable for someone to state that an LLM doesn’t get its data from the internet in real time, because in general, it is a fixed data blob.

The real crux of the matter is people’s understanding of what LLMs are and whether their answers can be trusted. We continue to see examples daily of people doing really stupid stuff because they accepted an answer from ChatGPT or a similar service as fact. Maybe it does have a tiny disclaimer warning against that, but the actual marketing of these things always makes them seem far more capable than they really are, and the LLM itself often speaks in a confident manner, which can fool a lot of people who don’t have a deep understanding of the technology and how it works.
Do you think that human communication is more than statistical transformation of input to output?
Not in “AI” itself? 😭
I guess I shouldn’t be disappointed.
Seeing people parrot popular opinions on AI for brownie points among their peers just reinforces my idea that most people on the internet are useful idiots these days.
They’re not meant to be taken seriously. Disagreeing with them should usually be an indicator that we’re doing something right because of how often they’re wrong without ever owning up to it.
“While you were parroting AI sheep, I was studying the blade” ah comment