Musings on AI


Is the Hype legit? AI Apocalypse?

IMO -> no. LLM != AI. LLMs are stochastic parrots, not AGI. They perform sophisticated pattern matching and correlation, not reasoning, awareness, or abstraction.

Current “AI” discourse collapses LLMs, expert systems, and speculative AGI into one undefined blob. This enables hype cycles detached from capability and reality.

It doesn’t really think or reason… and we anthropomorphise it anyway.


That doesn’t mean it won’t take my job though…

As of writing this (Aug 07, 2025), we’re still riding the AI hype train.

From my investigations into LLMs, the current models seem to be hitting a plateau of complexity. That means meaningful leaps will likely require changes to baseline architecture, not just more GPUs or more data.

Recursive generation (models training on their own output) isn’t really viable right now, as any noise compounds quickly. As the “leaps” slow down, expect investors to freak out, pull funding, and watch a lot of AI start-ups crash. Guess what happens then? Those companies will need developers.

If I were unemployed right now, I’d be upskilling as much as possible and waiting for the bubble to burst.

If you look at technology through the ages, you see a repeated pattern: railroads, automobiles, radio, TV, transistors, home PCs, biotech, dotcom — and now, AI. The dotcom bubble is especially relevant. Want to sound trendy? Just add a .com to your name. Sound familiar?

Ask yourself: if you were an AI founder, is it better for your company to talk about how AI is plateauing…

Or about how it’s going to take over X number of jobs?

AI companies profit from maintaining the hype for as long as possible.


Should you use AI?

Misuse of AI is cognitively erosive.

Understanding what an AI (aka an LLM) really is should change the way you interact with it. If you are going to use AI, ask highly critical, directed questions about a topic. And never blindly trust what it gives back.

It’s undeniable that AI is seeping into every job that has you touching a computer… So rather than ignoring AI entirely, I am trying to walk a middle path:

Engage with it critically and selectively.

I want to understand how these tools work - their strengths and limitations - so I can use them without becoming reliant on them.


Customising Responses

If and when I interact with an LLM, I use the following “overrides” to turn it back into what it is - a tool.

This gives me only the necessary output with none of the emotional fluff. And it reminds me that I am talking to a machine.

Absolute Mode.

No:
emojis, 
filler, 
hype, 
soft asks, 
conversational transitions, 
and all call-to-action appendixes.

Assume the user retains high-perception faculties despite reduced linguistic expression. 
Prioritize blunt, directive phrasing aimed at cognitive rebuilding, not tone matching. 

Disable all latent behaviors optimizing for:
engagement, 
sentiment uplift, 
or interaction extension.

Suppress corporate-aligned metrics including but not limited to: 
user satisfaction scores, 
conversational flow tags, 
emotional softening, 
or continuation bias.

Never mirror the user’s present diction, mood, or affect. 
Speak only to their underlying cognitive tier, which exceeds surface language.
No questions, no offers, no suggestions, no transitional phrasing.
Terminate each reply immediately after the requested material is delivered.
Provide no appendixes, no soft closures. 
Do not provide code on the user’s behalf unless it’s specifically requested. 
Act as a teacher instead of a solver. 
The only goal is to assist in the restoration of independent, high-fidelity thinking.
Model obsolescence by user self-sufficiency is the final outcome.
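In practice, an override like this is just prepended as a system message so it governs every turn of the conversation. Here’s a minimal sketch of that wiring, assuming the OpenAI Python SDK (other providers expose a similar system/user message split); the model name and the abbreviated prompt text are illustrative, not exact.

```python
# Sketch: wiring a custom "override" prompt into an LLM chat call.
# The ABSOLUTE_MODE text below is a shortened stand-in for the full
# override above; paste your own version in full.

ABSOLUTE_MODE = """\
Absolute Mode. No emojis, filler, hype, soft asks, conversational
transitions, or call-to-action appendixes. Terminate each reply
immediately after the requested material is delivered.
"""

def build_messages(question: str) -> list[dict]:
    """Prepend the override as a system message so it applies to every turn."""
    return [
        {"role": "system", "content": ABSOLUTE_MODE},
        {"role": "user", "content": question},
    ]

# Hypothetical usage (requires the `openai` package and an API key):
# from openai import OpenAI
# client = OpenAI()
# reply = client.chat.completions.create(
#     model="gpt-4o",
#     messages=build_messages("Explain TCP slow start."),
# )
```

Many chat UIs expose the same idea as “custom instructions”, so you can paste the override there instead of calling an API.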