There was this question I came up with that was very good at inducing hallucinations in what, at the time, I thought was a *lobotomized* LLM.
I can't recall the exact wording right now, but in essence you asked it to perform OpenGL batched draw calls in straight x86_64 assembly. It would begin writing seemingly correct code, quickly run out of registers, and then immediately start making up register names instead of moving data to memory.
You may say: big deal, it has nowhere to pull from to answer such an arcane fucking riddle, so of course it's going to bullshit you. That's not the point. The point is that it cannot realize it's running out of registers, and more importantly, that it makes up a multitude of register names, which _will_ degrade the context by injecting outright fabrications -- so the error keeps propagating even after you clearly point out the obvious mistake.
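For reference, here's a minimal sketch of what I mean by moving data to memory -- NASM syntax, written as an illustration and not taken from any model's actual output; the function name and stack offsets are made up. When you run out of general-purpose registers, you spill a live value to the stack and reuse the register; you do not conjure up an r17, because on x86_64 the numbered registers stop at r15.

```nasm
; Hypothetical sketch: spilling to stack memory when registers run out.
; (ABI niceties like preserving callee-saved registers are omitted;
; only the spill/reload idea is shown.)
section .text
global batch_example

batch_example:
    push rbp
    mov  rbp, rsp
    sub  rsp, 32            ; reserve stack space for spilled values

    ; imagine rbx, r8-r15, etc. already hold live values...

    mov  [rbp-8], r12       ; correct: spill r12 to memory to free it up
    mov  r12, rdi           ; reuse the register for a new value

    ; what the model did instead (no such register exists):
    ; mov  r17, rdi

    mov  r12, [rbp-8]       ; reload the spilled value when it's needed again

    leave
    ret
```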
Basically, my thought process went as follows: if it breaks at something fundamental, then it __will__ most certainly break in every other situation, in either subtle or overt ways.
Which raised the question: is it a trait of _this_ model in particular, or does it apply to LLMs in general?
I felt I was on to something, but I couldn't be sure because, again, I was under the impression that the model I tested this on was too old and stupid for these results to count as significant proof of anything; AI is certainly not my field, so I had to entertain the idea that I could be wrong, albeit begrudgingly -- for obvious reasons, I want at least "plausible based on my observations" rather than just "I can feel it in my balls".
So, as time went on, I ran similar tests on other models whenever I got the chance, and, full disclosure, I spent no money on this, so you may use that fact in your doomed attempt to disprove me lmao. Anyway, it's been a long enough while, I think, and I have a feeling you folks can guess the final answer already:
(**SLIGHTLY OMINOUS DRUM ROLL**)
The "lobotomy" in question was merely a low cap on context tokens (~4000), which I never went over in the first place; newer/"more advanced" models don't fare any better, and I have been _very_ lenient in what I consider a passable answer.
So that's that, I'm starting to think: I was right all along, and went through the burdensome hurdle of sincerely questioning the immaculate intuition of my balls entirely for naught -- learn from this mistake and never question your own mystical seniority. Just kidding, but not really.
The problem with the force of belief is that it cuts both ways: belief that I could be wrong is the reason I bothered looking further into this, whereas belief to the contrary very much compels me to dismiss doubt entirely. I don't need that, I need certainty, dammit. And though I cannot in good faith say that I am _certain_, "sufficiently convinced" will have to do for the time being.
TL;DR I don't know, but the more I see, the shittier it seems.