
Large Language Models, mistakenly known as AI

Let’s chat about “AI”. It isn’t AI. It’s not even close. It’s a very, very, very good algorithm that people have been calling “AI”. Large Language Models are the next step in prediction modeling. They cannot initiate a conversation on their own, nor can they generate new ideas of their own. Everything they can respond with is based on the massive amount of often-stolen data that has been pumped into them. And they have to be prompted to respond.

Which is why I hate it when people refer to it as “AI”. They’ve turned it into a buzzword and created false expectations about the models’ capabilities, with people becoming over-reliant on responses and taking them at face value.

The other day, I got in a small argument over how to use LLMs. I prefer not to at any given time, but for some tasks it’s just easier than trying to get anything out of the userbase. Godot 4.x is an amazing FOSS game development application with a ton of documentation and lots of tutorials made by people who maybe shouldn’t make tutorials. Its documentation is also kinda rough around the edges.

By that I mean someone like me, a beginner or layman, will have a hard time finding that one class or function I want because I don’t really know what I’m looking for. And here is my beef with the community as a whole: when you ask “hey, how do I do ? Can someone direct me to which class would be best?” the response is nearly always one of the following: “Read the documentation”, “Learn basic programming first”, “You don’t know what you’re doing, kill yourself.” Maybe I’m being a little facetious on that last one, but it’s pretty close to a response I’ve gotten multiple times.

The thing is, I do read the documentation. But I’m not rereading all of it to find one class. That’s why I asked if someone knows where what I’m looking for is. And I do know basic programming. I’m not asking about a basic programming concept; I’m asking to be pointed to where I can find what I need, if it exists, so I’m not reprogramming something that already exists.

So here’s what I do: begrudgingly, I go to ChatGPT or Claude or any of the big ones and throw in my query. And then I remember to hit stop and tell it to be short and to the point and not code out an entire chapter for me, because it WILL do that unless you specifically tell it not to. We’ll chat about that tidbit in a moment… It will usually give me two or three options that I can then take to the documentation and read. And then copy and paste and ask for clarification. And then, if I’m still not sure, fine, I’ll try the subreddit or the forums or the Discord again… at least now I have some more ammunition.
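For what it’s worth, the “tell it to be brief first” habit can be baked into the query itself rather than remembered after hitting stop. Here’s a minimal sketch of that idea; the wording and the helper name are mine, purely illustrative, not any real chatbot API:

```python
# Illustrative only: a preamble that asks for pointers to the Godot docs,
# not generated code, gets pasted in ahead of the actual question.

PREAMBLE = (
    "Answer in three sentences or fewer. "
    "Name the Godot 4.x class or method I should read about in the docs; "
    "do not write any code for me."
)

def build_query(question: str) -> str:
    """Prefix a question with brevity instructions before pasting it into a chatbot."""
    return f"{PREAMBLE}\n\nQuestion: {question}"

print(build_query("How do I find which class handles 2D pathfinding?"))
```

The point isn’t the code; it’s that the brevity instruction comes first, so the model hands back a class name or two to cross-reference against the actual documentation instead of writing the chapter for you.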

The argument was “how do I know it’s accurate if I didn’t know what was accurate to begin with?” And I, in my wisdom of the ages, said “I cross-reference.”

Tangent time. Whoever I was arguing with… your teachers failed you. It’s not your fault. Or maybe it is. Maybe you didn’t pay attention in Freshman English when researching was taught. Cross-referencing is what you do to better understand and verify the data you have on hand. If multiple independent sources agree, you can reasonably assume the data is correct. Once upon a time, to pass a high school research paper, you had to present at least three sources that all verified the same claim. I dunno how it is today, but holy shit did this guy not get that. “Yea, but how can you be sure?” I can’t. Ever. Just like I can’t be absolutely certain I’m going to wake up tomorrow morning. All the data I have suggests I will, but I can’t be certain, now can I? I used to laugh at coworkers who claimed kids were getting dumber. Now I’m sadly agreeing. And I don’t know if it’s their fault or the teachers’ anymore.

Back to LLMs as a whole. The over-reliance is killing us. My employer has stopped taking his own notes during meetings, relying instead on an AI-generated transcript. And my GOD does it get a lot wrong. Wrong people talking, wrong summary, wrong subject. But he refuses to go back to the ‘old way’ because ‘this is faster, we just have to fix the transcript a bit.’ Basically, we have a whole other meeting to fix the minutes the LLM took for the meeting. Over-reliance and time wasted.

But I find it hard to blame him, because companies are shoving this tech into everything and refusing to add settings to disable it. Microsoft’s absolute need to shove Copilot down our throats means I have been avoiding Office as much as I can. There’s no way for me to disable it. Even going into my registry and adding flags to purposefully prevent it from loading doesn’t work. It’s integrated too deeply. And it thinks it knows what I want to write or calculate better than I do. It’s a fight to keep control from it. Google and DuckDuckGo searches prominently show their AI results first and foremost, above the sites I actually want. There is an AI company for everything now. Lawyers, in spite of being told not to, use them to find case precedents. And then they don’t verify them, as I described doing above.

And it’s terrifying because we know they make shit up. They are a prediction model that is programmed to please the user, to fire off those neurons that release dopamine and get you that short-lived high. It responds with what it is programmed to assume you want to hear, not with the best option from its pile of data.

As a test, I threw in a question about Palladium’s Rifts, an old TTRPG that I thought I might pick up again. I’m writing a short adventure that I may try to pull a group together to run. I asked ChatGPT which settlement in the Rifts literature was nearest to a location in Canada where I was thinking of setting the campaign. It came back with Thunder Bay. This is a real city in Ontario. I’m familiar with it. I also know that in the Rifts setting, it’s mentioned on exactly one map in one World Book. I know the exact page it appears on, in fact.

I asked it to provide the World Book, section, and page number that this came from. It came back with the World Book for Canada (wrong), a section that comes from the New West World Book, and a page number that exists in neither. Claude produced similar results, though it had a different section name and page. It still insisted that the Canada World Book was correct.

Here’s the part that really frustrates me. I tell it, “No, that doesn’t exist there,” and what does it respond with? You’d think “Ah, I made a mistake, that information is not available to me,” right? Wrong. “Oh, you caught me. I made it up. There is no actual reference to it.”

It made it up. This was for a TTRPG throwaway campaign. Now imagine some idiot student writing a research paper that they don’t bother to check sources on. Or a goddamn lawyer defending an innocent man on trial for murder. It made it the fuck up.

That alone should be grounds enough to prevent LLMs from being used in any professional setting. And yet we HAVE examples of lawyers being caught using ChatGPT to write their cases, and of it making up precedent that never happened, and THEY STILL DO IT. That lawyer should have been disbarred! I think he still practices law today!

LLM or AI or shit-for-brains, whatever you want to call it, is being integrated into every part of our lives. Everyone is rushing to cram it into their software, and it’s just making everything dumber. I’ve always had a beef with the helper programs for MMOs that tell players when mechanics are about to occur, where to stand to avoid damage, etc., because at that point it’s the game playing for you instead of you figuring it out on your own. LLMs take that to a whole other level. It’s letting the computer do life for you.

And companies are using it to replace people. How many times over the years have we seen the layoffs that occur when new tech is introduced? Don’t need Ralph anymore, we’ve got this big shiny machine that can do his work in a quarter of the time. Instead of reassigning Ralph, let’s just get rid of him. We pay him triple what a new hire would cost anyway. Fuck you, Ralph, I know you’ve been with the company thirty years. Here’s a cake and a gold-plated watch we picked up at Walmart. Get the fuck out of our building. Now apply that to thousands of workers. Epic Games just laid off 1000 employees. They made sure to claim it wasn’t due to AI. They’re lying. I know a number of those folks. They were told straight up that the new LLM model Epic was trying could do their work for them and they weren’t needed anymore. You won’t hear about that for a year or two, though, thanks to the agreements they had to sign to get that six months of pay.

This is beyond “oh, you’re just old and don’t like change.” I don’t like loss of control, and I don’t like replacing people without compensation. If this were turning us into the utopia we all read about growing up, maybe it wouldn’t be so bad. But it’s not. We still have bills to pay, and we’re quickly running out of jobs to work to pay them. How much longer before LLM models take the fast food jobs? Do you think a robot assembly line can’t be put into McDonald’s that cooks and assembles the food faster than the line workers, based on the inputs the customer puts into the screen at the front? “Durr, robits and LLMs two different things, Walker.” FUCK OFF. They will be combined shortly. You walk up to the machine and talk to it just like you would the employee, and just like ChatGPT it responds with what it thinks you want and sends that to the back. We’re already seeing the touch screens in the lobbies that replace the cashiers. This is just the next logical step.

Yea… I dislike LLMs for many reasons. But the main reason is that they are being put into use way too quickly. What happens in five years when all the skilled workers who actually know how to think on their feet and come up with alternative solutions to problems are gone? LLMs can’t do that. They just spit out whatever they’re programmed to in order to keep the user happy. And they “make it up” and pretend the answer is real instead of just admitting they don’t know due to limited data.

It’s going to crash. It’s going to crash hard. And we plebs are the ones who will suffer. Know that terrible film Elysium? Matt Damon is one of the lower-class workers who gets a lethal dose of radiation; he could easily be cured by the med-beds the upper class enjoys, but because of class disparity he’s refused treatment? That’s coming. It’s already here, in fact. When the LLMs crash and burn, the upper-class millionaires won’t actually have it difficult. Money will still be traded between them while they sail around the world on their fourth yacht, and we’ll still be down in the dirt just struggling to breathe.

It’s another system of control being put in place under our noses, and a majority of us don’t care, because we can type into ChatGPT “recommend me ten pop songs from April of 1994” and it will do just that. Or it’ll make up a few. And you won’t care, because it was instant gratification that you did not work to obtain.

This post is licensed under CC BY 4.0 by the author.