Open AI? Closed AI!

The industry learned a neat trick: say “open” like it’s a virtue, then run everything like a vault.

“Open” became branding. A vibe. A halo word.

And yes, OpenAI is the sharpest irony of the lot: a name that reads like a promise, attached (for years) to models you can’t inspect, weights you can’t run, data you can’t audit, and governance you can’t independently verify. Their own Charter sets a soaring bar (“broadly distributed benefits,” “avoid enabling uses… that harm humanity or unduly concentrate power”). Then reality looks like this: closed by default, selectively opened when strategic, and explained away as safety and competition.

They’re not unique. They’re just the cleanest example of a pattern: what they say vs what they do.

Open is what they say.

Closed is what they ship.

“We’re open” (marketing) vs “we’re open” (meaning)

In software, “open” had teeth: OSI’s Open Source Definition forbids discrimination against people or fields of endeavor and sets real criteria for what counts.

In AI, “open” got melted down into softer substitutes:

  • “Open” meaning you can use our API (that’s not open; that’s rental; the sketch after this list makes the difference concrete).
  • “Open” meaning we published a blog post (that’s a press release with citations).
  • “Open” meaning open weights (often still not open training data, not open code, not open governance).
  • “Open” meaning we’ll tell you we did safety (not the same as proving you did).
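The first substitute is the easiest to see through. Here is what “open as API access” looks like in code: a minimal sketch using OpenAI’s Python SDK (v1+), with an illustrative model name. Every capability below is rented; the provider can revoke the key, swap the model, or change the policy stack without notice.

```python
# "Open" as rental: a hosted API call. You hold a key, not a model.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment; revoke the key, lose the capability

resp = client.chat.completions.create(
    model="gpt-4o",  # illustrative; the provider can deprecate or silently update it
    messages=[{"role": "user", "content": "Which filters are applied to my requests?"}],
)
print(resp.choices[0].message.content)
# Nothing here is inspectable: not the weights, not the training data,
# not the server-side system prompt or moderation layer.
```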

The Open Source Initiative has been blunt about “open washing” in the Llama ecosystem: weights are available, but the licenses and usage restrictions fail open-source standards. And OSI’s newer work, the Open Source AI Definition, tries to restore teeth: to be “open,” an AI system needs more than downloadable blobs; it needs the ingredients and the process (training data, source, settings) so you can actually study and reproduce what matters.

That’s the standard the industry keeps trying to dodge.

The OpenAI case study: openness when it suits, opacity when it costs

Remember GPT-2? OpenAI explicitly said it would not release the dataset, training code, or model weights at first, opting for a staged release. You can agree or disagree with the safety rationale, but the move matters historically: “open” already had footnotes.

Then GPT-4 arrives with a technical report that flat-out says: because of “the competitive landscape and the safety implications,” the report contains no further details about architecture (including model size), hardware, training compute, dataset construction, training method, or similar. That’s not openness. That’s managed disclosure.

To their credit, OpenAI has also published system cards and risk assessments—e.g., GPT-4o’s system card rates persuasion as “medium” risk and discusses external red teaming and evaluations. But notice the pattern: the safety story is offered, not independently enforceable. You read it. You nod. You still can’t verify the runtime constraints on any given day.

And then—August 2025—OpenAI releases gpt-oss: open-weight models under Apache 2.0, runnable locally, explicitly framed as “open-weight language models.” That’s a real move in the right direction. It’s also revealing: when competitive pressure and ecosystem strategy align, openness suddenly becomes feasible.
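By contrast, “runnable locally” is checkable. A minimal sketch with Hugging Face transformers, assuming the published repo id openai/gpt-oss-20b and a machine with enough GPU memory (device_map="auto" also needs the accelerate package installed):

```python
# Open weights: download once, run on your own hardware, no API key required.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "openai/gpt-oss-20b"  # assumed Hugging Face repo id for the smaller gpt-oss model
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Who controls your runtime policies?"}]
inputs = tok.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
out = model.generate(inputs, max_new_tokens=64)
print(tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
# The weights are yours to inspect and fine-tune. The training data,
# data pipeline, and full evaluation harness still aren't.
```

Note what the closing comment concedes: even this release is open weights, not open ingredients.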

So which is it?

If “open” is the mission, it shouldn’t arrive only when the market forces it.

The industry’s favorite move: call it safety, keep the lock

Here’s the cynical loop the whole field runs:

  1. Announce values: openness, safety, broad benefit.
  2. Withhold what would allow real scrutiny: weights, data, full eval harnesses, enforcement mechanisms.
  3. Offer a substitute: a constitution PDF, a system card, a risk label, a pledge.
  4. Ask for trust.

I’m not saying they’re lying about caring. I’m saying the structure is wrong. Good intentions don’t scale. Promises don’t bind machines. Documents don’t enforce themselves.

A constitution inside training is still: “trust us, we raised the model right.”

A system card is still: “trust us, we measured risk.”

An “open” release that’s only weights is still: “trust us, this is open enough.”

The gap is always the same:

Can you prove the rules were applied at runtime?

What they say vs what they do, line by line

They say: Open.

They do: Gated.

They say: Trust.

They do: Control.

They say: Safety.

They do: Opaque enforcement.

They say: Broad benefit.

They do: Asymmetric access to the engine.

What “open” should mean again

If the industry wants to keep using the word open, it needs to pay the cost of the word.

Open should mean you can:

  • Run it (not just rent it).
  • Inspect it (weights/code/settings that matter, under terms that match open standards).
  • Audit it (inputs, governance, and safety claims that can be verified, not merely narrated).
  • Hold it accountable (tamper-evident logs, reproducible evaluations, third-party verification; a toy sketch follows this list).
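What would the “tamper-evident logs” bullet look like at the smallest possible scale? A toy sketch, not any vendor’s actual mechanism: every runtime policy decision is appended to a hash chain, so quietly rewriting history breaks every later link.

```python
# Toy tamper-evident log: each entry's hash covers the previous entry's hash,
# so editing any past record invalidates everything after it.
import hashlib
import json
import time

def _digest(body: dict) -> str:
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def append_entry(log: list, record: dict) -> None:
    """Append a runtime policy decision, chained to the previous entry."""
    body = {"ts": time.time(), "record": record, "prev": log[-1]["hash"] if log else "genesis"}
    log.append({**body, "hash": _digest(body)})

def verify(log: list) -> bool:
    """Recompute the chain; any edited or reordered entry breaks it."""
    prev = "genesis"
    for entry in log:
        body = {"ts": entry["ts"], "record": entry["record"], "prev": entry["prev"]}
        if entry["prev"] != prev or entry["hash"] != _digest(body):
            return False
        prev = entry["hash"]
    return True

log = []
append_entry(log, {"rule": "external-eval-v3", "applied": True})
append_entry(log, {"rule": "persuasion-risk-gate", "applied": True})
assert verify(log)                     # intact chain verifies

log[0]["record"]["applied"] = False    # quietly rewrite history...
assert not verify(log)                 # ...and the chain exposes it
```

Real accountability needs more than this (signed timestamps, third-party anchoring, reproducible evals), but the asymmetry is the point: a narrated system card can’t fail a check; a chained log can.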

Until then, “Open AI” is just a spell they cast on you so you stop asking where the locks are.

And I’m done pretending the locks don’t matter.
