AI-Assisted Content: Why Transparency Matters

November 16, 2025

An honest discussion about using AI assistance in professional work, the distinction between AI-generated and AI-assisted content, and why transparency matters.

I recently added a transparency section to this site’s about page disclosing that it was built with significant AI assistance. This post explains why I believe that matters, how the collaboration actually works, and where the line between human expertise and AI implementation really sits.

The Elephant in the Room

A lot of content online is being created with AI assistance right now, and most of it isn’t disclosed. Blog posts, documentation, code, websites - AI tools are everywhere in the modern workflow. The question isn’t whether professionals are using these tools (many are); it’s whether we’re being honest about it.

I chose to disclose it. Here’s why.

AI-Generated vs AI-Assisted: A Critical Distinction

There’s a massive difference between content that is AI-generated and content that is AI-assisted.

AI-generated content is what you get when someone with no expertise prompts an AI to write about a topic they don’t understand. It’s the “write me a blog post about Kubernetes security” approach, where the prompter couldn’t tell you if the output is accurate. The AI is doing both the thinking and the implementation, and the result is often plausible-sounding but shallow or wrong.

AI-assisted content is fundamentally different. The human has the expertise, has done the work, knows the domain deeply - and uses AI to help articulate and present that knowledge. When I write about implementing security controls in an MSP environment, I’m drawing from years of actually doing that work. The AI didn’t learn about MSP security by reading the internet - I learned it by managing production systems, responding to incidents, and making decisions with real consequences.

This website falls squarely in the AI-assisted category. The technical knowledge, the scripts, the security experience - that’s all mine from 7 years in IT. What the AI did was help structure the presentation, write clear explanations, and implement the technical details of the site itself.

How the Collaboration Actually Works

The reality of working with AI is less magical than it might seem. It’s iterative, requires constant validation, and demands that you actually know what you’re talking about.

When I write technical documentation, I start from real implementations I’ve done. Take the recent post about AWS WAF configurations - that came from actually setting up WAF rules to block AI scrapers on this site. I know what works, what doesn’t, what the gotchas are, because I debugged it in production. The AI helps me organize that knowledge into clear sections and writes clean prose. But I’m the one who catches when it suggests a configuration that would break in edge cases, or when it glosses over critical details.
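To make the WAF example concrete - this is a rough illustration, not this site’s actual configuration - a WAFv2 rule that blocks a scraper by its User-Agent header can be built as a plain dictionary. The bot token, priority, and metric name here are placeholders:

```python
def block_user_agent_rule(bot_token: str, priority: int) -> dict:
    """Sketch of an AWS WAFv2 rule that blocks requests whose
    User-Agent header contains the given token (case-insensitive)."""
    return {
        "Name": f"Block-{bot_token}",
        "Priority": priority,
        "Statement": {
            "ByteMatchStatement": {
                # Lowercase the search string and the header so matching
                # is effectively case-insensitive.
                "SearchString": bot_token.lower(),
                "FieldToMatch": {"SingleHeader": {"Name": "user-agent"}},
                "TextTransformations": [{"Priority": 0, "Type": "LOWERCASE"}],
                "PositionalConstraint": "CONTAINS",
            }
        },
        "Action": {"Block": {}},
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": f"Block-{bot_token}",
        },
    }

# Example: a rule targeting a hypothetical "GPTBot" user agent.
rule = block_user_agent_rule("GPTBot", 0)
```

A dict like this would be passed in the `Rules` list of a web ACL (e.g. via boto3’s `wafv2` client). The gotchas the AI won’t catch - rule priority conflicts, scrapers that rotate user agents, collateral blocking of legitimate crawlers - are exactly the production details mentioned above.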

Code development follows a similar pattern, but with an important caveat - I’ve been coding since I was a kid. I’m not someone who vaguely describes what they want and hopes the AI gets it right. I can analyze code complexity using tools like ruff, identify where implementations can be improved, understand trade-offs between different approaches, and write the code myself if needed. The AI accelerates the process, but I’m reviewing it with the same critical eye I’d apply to code from any developer.
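As one concrete way to do that kind of complexity analysis with ruff, its mccabe check (rule C901) flags functions that exceed a complexity threshold. A minimal `pyproject.toml` sketch - the threshold of 10 is an arbitrary example, not a recommendation:

```toml
[tool.ruff.lint]
select = ["C901"]

[tool.ruff.lint.mccabe]
max-complexity = 10
```

Running `ruff check` with this config reports any function whose cyclomatic complexity exceeds the limit, which is a useful signal for deciding where AI-generated code needs restructuring.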

Here’s a concrete example: when building the browser-based tools on this site, I specified that they needed to run entirely client-side with no data leaving the browser. The AI would occasionally suggest implementations that made API calls or stored data in ways that violated that requirement. I caught those immediately because I understand both the technical implementation and the privacy implications. Someone without that background might have shipped code that claimed to be private but wasn’t.

If there’s a bug in one of my scripts, a technical error in my documentation, or bad security advice on this site - that’s on me. I reviewed it, I approved it, I’m responsible.

Where AI Adds Value and Where It Falls Short

After building this entire site with AI assistance, I’ve developed a clear sense of where the technology helps and where it fails.

Where AI excels:

  • Structuring long technical documents into logical sections
  • Writing grammatically correct, flowing prose
  • Implementing code from well-defined specifications
  • Generating boilerplate and maintaining consistency
  • Quick iteration on designs and layouts

Where AI completely fails:

  • Validating whether security implementations will work in real environments
  • Understanding organizational context and real-world constraints
  • Knowing which problems are worth solving and which are academic exercises
  • Drawing from experience with how systems actually behave in production
  • Making judgment calls with real consequences

I’ve seen AI confidently suggest configurations that would cause outages, recommend security practices that sound good but don’t address actual threats, and provide technically correct answers to the wrong question. Every single time, the failure came down to lack of real-world context and experience - things that only a human who’s done the work can provide.

Here’s the key insight: AI can accelerate 80% of the mechanical work, but 100% of the expertise, judgment, and validation must come from the human. Remove the human expertise, and you get plausible-sounding garbage that will fail in production. Remove the AI assistance, and you get slower but equally valid work.

Ethical Considerations

Using AI assistance raises legitimate questions about authenticity and credibility. If AI wrote the words, is the content authentic? Does AI assistance undermine professional credibility?

My position is that authenticity comes from whether the content accurately represents real knowledge and experience, not from who typed the characters. A doctor using AI to draft patient education materials is still providing medical expertise - the value is in the medical knowledge and judgment, not the mechanical writing.

The credibility question only matters if the human lacks real expertise or if the AI usage is undisclosed. If I claimed to have manually typed every word on this site while actually using AI, that would be dishonest. If I used AI to write about topics I don’t actually understand, that would undermine credibility. But using AI to help present expertise I genuinely have? That’s just tooling.

There’s a question of responsibility that’s not debatable: if something is wrong, it’s always the human’s fault. AI assistance doesn’t absolve you of that responsibility. If there’s bad advice on this site, don’t blame the AI - blame me for not catching it.

The transparency question is where I think we have a real obligation. Readers deserve to know how content was created. It allows them to apply appropriate scrutiny, sets realistic expectations, and builds trust. More importantly, as AI becomes ubiquitous in professional workflows, we need honest precedents. If everyone uses AI but pretends they don’t, we create a dishonest standard.

Why I Chose to Disclose

Could I have not disclosed the AI assistance? Absolutely. Would anyone have known? Probably not. The expertise is real, the content is accurate, the projects are mine.

But that’s not the standard I want to set. I added the transparency section because readers deserve to know, because it normalizes honest AI usage, and because I’m confident enough in the actual expertise to be transparent about the process.

If the value of this site is real - if the scripts actually solve problems, if the documentation reflects genuine experience, if the tools are useful - then disclosing AI assistance doesn’t diminish that value. It just provides context about how it was created.

I’d rather contribute to a culture where professionals are honest about using powerful tools than participate in a collective pretense where everyone uses AI but nobody admits it.

The Bottom Line

AI assistance in professional work is here to stay. The question is whether we normalize honesty about it or create a culture where everyone uses it but pretends they don’t.

This website represents my knowledge, my projects, my experience - implemented with AI assistance. The transparency section makes that clear. I think that’s the ethical approach.

The technology is powerful. How we choose to use it, and whether we’re honest about it, is up to us. I’m choosing transparency.

If you’re considering using AI assistance in professional work:

Do:

  • Use AI to accelerate implementation of expertise you actually have
  • Maintain strict quality control and validation
  • Be transparent about your process
  • Take full responsibility for all output
  • Ensure you could do the work without AI, just slower

Don’t:

  • Use AI to fake expertise you don’t have
  • Publish AI output without thorough review
  • Claim AI-assisted work as entirely manual effort
  • Sacrifice accuracy for speed
  • Rely on AI for judgment calls requiring real experience

What are your thoughts on AI transparency in professional content? I’m interested in hearing different perspectives. Reach out at kevin@glyph.sh.