(Exp.1) About this site (and AI-Generated Text)

This site is drafted using AI and reviewed by a human. AI text isn't automatically bad; what matters is honesty, purpose, and accountability. This page covers when AI text is a problem, why this site is different, and which ethical standards matter.

This site exists to help people understand AI without the jargon, hype, or nonsense.

It's aimed at anyone who finds themselves baffled by AI: particularly older adults who didn't grow up with this technology, but also journalists, business people, and anyone who just wants a straight answer about what AI actually is and what it can and can't do.

This isn't a blog. It's a reference site. No dated posts, no "content calendar", no artificial urgency. Just explanations you can read when you need them.

But wasn't this site written using AI?

Yes. The site is drafted using AI (mostly Claude, made by Anthropic, and occasionally ChatGPT by OpenAI). Then a human (that's me) reviews everything: edits it, checks it, and decides what stays and what goes.

I know what you're thinking: isn't that a bit rich? A site explaining AI's problems, written by AI?

Fair enough. But the thing is, AI-generated text isn't automatically bad. It's bad when it's used badly. And there's a difference between using AI as a drafting tool and letting it pretend to be something it's not.

When AI text is actually a problem

AI becomes a problem when people use it to cheat, deceive, or flood the internet with convincing-looking rubbish. Here's what that looks like:

Students handing in AI essays. University isn't about producing essays; it's about learning to think, argue, and research. If you outsource that to AI, you've learned nothing. You've just paid tuition fees to trick a computer into doing your homework. Universities treat this as cheating because that's exactly what it is. Not to mention that when you graduate (if you graduate), you won't last five minutes in a job, if you get one.

Journalists publishing AI articles without checking them. Reporting means finding things out, verifying facts, and standing behind what you write. When someone publishes AI output as journalism, they're not reporting; they're laundering machine-generated text under a newspaper's credibility. Several publications have been caught doing this, and it damages trust. (If there was any to start with!)

Fake reviews and testimonials. AI makes it absurdly easy to generate hundreds of plausible-looking reviews from customers who don't exist. This poisons online shopping. When you can't tell which reviews are real, none of them are worth reading.

Researchers submitting AI-written papers. Academic papers are meant to represent actual thinking and analysis. Getting AI to write your paper means you haven't actually done the intellectual work. You've just arranged some sentences that sound about right. This has already caused embarrassing retractions when obvious AI nonsense slipped through peer review.

Lawyers submitting AI briefs with invented case law. Several lawyers in the US have been sanctioned after filing documents that cited legal cases which simply didn't exist; their AI chatbot had made them up. Law requires precision, and AI's habit of confidently inventing things makes it dangerous in legal work unless every single claim is verified by an actual human.

LinkedIn spam disguised as insight. Some people pump out dozens of AI-generated posts a week, creating the impression of constant expertise and productivity. But there's no thinking behind it. It's just volume: intellectual noise dressed up as wisdom.

What ties all these together is deception. Someone's presenting AI output as if they did the work, had the experience, checked the facts, or thought the thoughts. That's the problem.

Why this site is different

This site isn't hiding what it is. You're reading my admission right now, before you've seen anything else. (But please do carry on reading!)

More to the point though, purpose matters. This site exists to help people understand what AI is, how it works, and what it can and can't do. There's no deception and no attempt to pass off AI output as human expertise.

Every page published here gets reviewed. The AI drafts, but I decide what's accurate, what's clear, and what's actually useful. Think of it like dictating to a secretary who writes very fast but doesn't always understand what they're writing. They will produce text quickly, but you're the one who has to read it back and make sure it actually makes sense.

If something here is wrong, that's my fault, not the AI's. The AI can't be held accountable; it's just software. But I can be, and I am.

Why I think this is sensible

Writing clear explanations of technical topics takes time. Without AI, this site probably wouldn't exist. Not because the information isn't out there, but because one person can't realistically write dozens of structured, accessible explanations alone.

AI makes it possible to draft content quickly, which I can then review and approve. The alternative isn't "human-written content"; it's "no content at all", or something so limited it doesn't help anyone.

If you read the explanations here, understand AI better, and make more informed decisions about whether to use it, then the site has done something useful. The question isn't "was AI involved?" The question is "did this actually help you understand something important?"

That's the test. If this site passes it, using AI was justified. If it doesn't, it wasn't, regardless of how much human supervision was involved.

What actually matters

The problem with AI-generated text isn't the technology. It's the honesty, intent, and accountability of the person using it.

Are you being honest? Are you claiming credit for work you didn't do? Presenting AI output as if it came from human expertise?

What's your intent? Are you trying to help people, or deceive them? Using AI to save time on something useful, or to fake productivity?

Will you take responsibility? If there's an error, do you stand behind it? Or do you blame the AI and wash your hands of it?

This site attempts to meet those standards. Whether it succeeds is for you to judge.

But at least you know what you're reading and why it exists. That's the bare minimum for any ethical use of AI. And it's the standard I commit to.

Who's behind it

I'm Phil: a British writer and editor, and a former Information Security Consultant with a multinational pharmaceutical company.

How to use this site

Read what interests you. Skip what doesn't. There's no particular order. Each explanation stands alone.

If you spot an error, have a question, or think something needs clarifying, drop me an email: ai-explanations@proton.me

That's it. No newsletter signup spam, no paywalls, no tracking your every move. Just explanations.