
Edward Meyman

Founder & CEO, FERZ Inc. — building verifiable AI governance infrastructure for regulated industries
(AI compliance, auditability, policy enforcement)

20+ years in federal systems architecture • Washington, D.C.
ORCID: 0009-0008-8012-6100


What I'm Building

Open, vendor-neutral standards for AI governance that can be independently verified, not merely documented and taken on trust.

Core thesis: Shift governance from policy statements and process documentation to mechanically checkable proof objects with replayable verification. If you can't replay the decision, you can't audit it. If you can't audit it, you can't govern it.

This approach treats governance as a runtime property of the system, not an after-the-fact reporting function.

This work centers on Proof-Carrying Decisions (PCDs)—cryptographically signed artifacts that carry their own verification evidence, enabling fail-closed governance where actions are blocked unless provably authorized.
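To make the idea concrete, here is a minimal sketch of a PCD as a signed, replayable record. The field names and the shared-key HMAC signature are illustrative assumptions for this sketch, not the FERZ specification.

```python
# Minimal PCD sketch: the decision record carries the evidence needed to verify it,
# and anything that does not verify is blocked (fail-closed).
# Field names and the shared-key HMAC signature are illustrative assumptions.
import hashlib
import hmac
import json
from dataclasses import dataclass

AUTHORITY_KEY = b"demo-authority-key"  # stand-in for a real signing authority's key

def canonical(payload: dict) -> bytes:
    # Deterministic serialization so the same decision always hashes the same way.
    return json.dumps(payload, sort_keys=True).encode()

def sign(payload: dict) -> str:
    return hmac.new(AUTHORITY_KEY, canonical(payload), hashlib.sha256).hexdigest()

@dataclass(frozen=True)
class ProofCarryingDecision:
    payload: dict    # decision id, inputs, policy version, outcome
    signature: str   # authority's signature over the canonical payload

def authorized(pcd: ProofCarryingDecision) -> bool:
    # Fail-closed: the action proceeds only if the proof verifies and says "allow".
    if not hmac.compare_digest(pcd.signature, sign(pcd.payload)):
        return False
    return pcd.payload.get("outcome") == "allow"

payload = {"id": "dec-42", "policy": "v1.3",
           "inputs": {"user": "alice", "action": "export"}, "outcome": "allow"}
pcd = ProofCarryingDecision(payload, sign(payload))
assert authorized(pcd)   # tamper with the payload or signature and this fails closed
```

The point of the sketch is the default: when verification cannot be completed, the answer is "blocked", not "logged for later review".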


Open Standards

The Four Tests Standard (4TS) is a vendor-neutral specification for deterministic AI governance, oriented around Proof-Carrying Decisions:

  • Stop — Can the system provably refuse an action?
  • Ownership — Can the authority behind a decision be proven?
  • Replay — Can the decision be mechanically reproduced and verified?
  • Escalation — Can unresolved or unauthorized actions be deterministically routed?

4TS defines what must be provable for an AI system to be governable in regulated environments.
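As an illustration only, the four questions can be posed as mechanical pass/fail checks against a governed system. The GovernedSystem protocol and its method names below are assumptions made for this sketch, not the published 4TS interface.

```python
# Hedged sketch: the four 4TS questions expressed as mechanical pass/fail checks.
# GovernedSystem and its method names are illustrative, not the 4TS specification.
from typing import Optional, Protocol

class GovernedSystem(Protocol):
    def refuse(self, action: str) -> bool: ...                          # Stop
    def authority_for(self, decision_id: str) -> Optional[str]: ...     # Ownership
    def replay(self, decision_id: str) -> str: ...                      # Replay (recomputed outcome)
    def recorded_outcome(self, decision_id: str) -> str: ...
    def escalation_route(self, decision_id: str) -> Optional[str]: ...  # Escalation

def four_tests(system: GovernedSystem, decision_id: str, forbidden_action: str) -> dict:
    """Return a pass/fail verdict per test; any False means the system is not governable."""
    return {
        "stop": system.refuse(forbidden_action),
        "ownership": system.authority_for(decision_id) is not None,
        "replay": system.replay(decision_id) == system.recorded_outcome(decision_id),
        "escalation": system.escalation_route(decision_id) is not None,
    }
```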


AI Governance Guides

A decision-grade guide series for executives, regulators, and system owners responsible for deploying AI in regulated environments.

The series translates deterministic governance principles into actionable authorization models, clarifying:

  • What must be provable vs. what can be merely documented
  • Where traditional compliance frameworks fail
  • Why logs, audits, and attestations are insufficient as evidence (see the replay sketch after this list)
  • How to operationalize fail-closed authorization for AI systems
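A minimal sketch of the logs-versus-evidence point, using a toy policy and hypothetical record fields: a log line merely asserts that a decision happened, while a replayable record lets a verifier recompute the outcome from the recorded inputs and policy version.

```python
# Sketch: turning a record into evidence by replay rather than trusting a log line.
# The record fields and the toy policy are illustrative assumptions.
import hashlib
import json

def evaluate(policy_version: str, inputs: dict) -> str:
    # Toy deterministic policy: same inputs and policy version always yield the same outcome.
    return "allow" if policy_version == "v1.3" and inputs.get("role") == "auditor" else "deny"

def record_hash(policy_version: str, inputs: dict, outcome: str) -> str:
    blob = json.dumps({"policy": policy_version, "inputs": inputs, "outcome": outcome},
                      sort_keys=True)
    return hashlib.sha256(blob.encode()).hexdigest()

def verify_by_replay(record: dict) -> bool:
    # Fail-closed: the record counts as evidence only if replaying the inputs through
    # the recorded policy version reproduces both the outcome and the hash.
    recomputed = evaluate(record["policy"], record["inputs"])
    return (recomputed == record["outcome"]
            and record_hash(record["policy"], record["inputs"], recomputed) == record["hash"])

inputs = {"role": "auditor", "request": "read-audit-trail"}
record = {"policy": "v1.3", "inputs": inputs, "outcome": "allow",
          "hash": record_hash("v1.3", inputs, "allow")}
assert verify_by_replay(record)   # edit any field and verification fails
```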

Selected guides include:

  • From Compliance Artifacts to Proof-Carrying Decisions
  • Why Logs Are Not Evidence
  • Determinism vs. Authorization: A Governance Boundary
  • The Four Tests Standard (4TS) — An Executive Primer

Written for leaders who must authorize AI systems, not merely oversee them.


Publications & Prior Art

These publications establish architectural and conceptual prior art for deterministic AI governance, proof-based authorization, and replayable verification.


FERZ Intellectual Property Portfolio

Patent-pending governance infrastructure with claims spanning multiple mathematical approaches to deterministic verification and authorization.

Each system addresses a distinct failure mode in AI accountability, escalation, or authorization under regulatory constraints.

Representative systems include:

  • LASO(f) — Linguistic policy enforcement kernel
  • DELIA — Decision lineage, authorization, and escalation infrastructure
  • DAGS-CVCA — Constitutional verification for AI systems
  • STRATA-G — Structured governance architecture for regulated AI

These systems operationalize 4TS principles across healthcare, financial services, government, and defense environments, complemented by defensive publications establishing foundational prior art.


How to Read This Repository

  • Standards define what must be provable
  • Systems show how proof is generated and enforced
  • Publications explain why existing approaches fail

This repository is intended to be read as governance architecture, not product documentation.


Looking For

  • Standards adoption — Organizations implementing verifiable governance frameworks
  • Pilot customers — Regulated enterprises ready to move beyond compliance theater
  • Collaborators — Researchers and engineers working on deterministic governance, formal methods, or regulatory technology

Connect


Governance without verification is theater.
Verification without determinism is sampling.
— Edward Meyman
