The kooky world of AI

AI has a particular talent for being both impressive and ridiculous in the same sentence.
One moment it is helping you untangle a gnarly bug. The next moment it is confidently inventing a library that does not exist.
I am choosing to enjoy the weirdness while still treating it like a power tool.
AI is not magic. It is a compressor
A decent mental model for me is: AI compresses patterns from a lot of human writing and code, then predicts what should come next.
That can be incredibly useful.
It can also produce answers that sound right and are still wrong.
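The "compress patterns, then predict what comes next" framing can be made concrete with a toy sketch. This is only an analogy, not how real models work internally: count which word most often follows each word in some text, then "predict" by replaying the most common continuation.

```python
from collections import Counter, defaultdict

# Toy analogy only: tally continuations seen in a tiny corpus,
# then predict the most frequent one. Real models are far richer,
# but the shape is the same: learned patterns in, likely next token out.
corpus = "the model predicts the next word and the next word again".split()

following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict(word):
    """Return the continuation most often seen after this word."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict("the"))  # prints "next" - seen twice, vs "model" once
```

Notice the failure mode is baked in: the prediction is always fluent, and fluency says nothing about truth.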
So I try to keep one rule in mind:
If it matters, verify it.
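For code suggestions, "verify it" can be mechanical. A minimal sketch of one habit I mean here: before trusting a function an AI names, check that the module imports and actually exposes that attribute. (The `api_exists` helper is my own illustration, not a standard utility.)

```python
import importlib

def api_exists(module_name, attr_name):
    """Check that a module imports and really has the named attribute."""
    try:
        module = importlib.import_module(module_name)
    except ImportError:
        return False
    return hasattr(module, attr_name)

# A real function on a real module:
print(api_exists("json", "dumps"))        # True
# A plausible-sounding function that does not exist:
print(api_exists("json", "dump_pretty"))  # False
```

Thirty seconds of this beats shipping an import that was invented on the spot.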
The best prompts are basically good briefs
When AI output is bad, my first instinct used to be “the model is bad.”
Now I usually assume my prompt is vague.
What helps:
- A clear goal (what “done” looks like)
- Constraints (stack, performance, accessibility, tone)
- Inputs (existing code, a link, the exact error)
- A request for tradeoffs (options, pros/cons)
That is just good communication. AI forces the issue.
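The brief above is structured enough to script. A minimal sketch, assuming nothing beyond plain strings (the section names and the `build_brief` helper are my own convention, not anything a model requires):

```python
def build_brief(goal, constraints, inputs, ask_for_tradeoffs=True):
    """Assemble a prompt with the same parts as a good written brief:
    goal, constraints, inputs, and a request for tradeoffs."""
    lines = [f"Goal: {goal}", "Constraints:"]
    lines += [f"- {c}" for c in constraints]
    lines += ["Inputs:", *(f"- {i}" for i in inputs)]
    if ask_for_tradeoffs:
        lines.append("Give me 2-3 options with pros and cons.")
    return "\n".join(lines)

prompt = build_brief(
    goal="Fix the flaky login test",
    constraints=["Python 3.12", "no new dependencies"],
    inputs=["test_login.py", "the exact stack trace"],
)
print(prompt)
```

The point is not the helper; it is that if you cannot fill in those fields, the model was never the problem.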
It is a mirror for your own thinking
When I ask an AI to explain something and it gives me mush, it usually means I do not understand the thing as well as I thought.
That is not a failure. It is a clue.
Sometimes the most valuable result is not the answer, but the realization that I need to refine the question.
The weirdest part: it can be fun
There is a playful side to AI that I do not want to lose.
Sometimes I use it like a creative partner:
- “Give me 10 silly metaphors for state management.”
- “Rewrite this paragraph like a seaside postcard.”
- “Invent a fake town in Mayo and describe its annual festival.”
Half of it is nonsense. The other half shakes something loose.
My current stance
- Use it to accelerate drafts, debugging, and exploration.
- Keep responsibility on the human side of the keyboard.
- Treat output as a starting point, not a verdict.
If you are using AI in your work, what is your most reliable use case so far?