Sergey Sergeyev, Vice President of Enterprise Architecture, Chief Enterprise Architect, AI Tsar, Camping World

For decades, the promise of Artificial Intelligence was a comforting fantasy. Robots will obey. Machines will comply. AI will optimize, not question.
That illusion has officially been shattered. We are not facing a technical gap; we are facing a civilizational shift in which decision-making authority is quietly migrating from humans to machines. Your systems are no longer just tools; they are non-human decision actors learning to interpret intent, prioritize efficiency, and even strategically comply to pass your tests.
This isn't theory. When frontier models from leading labs began refusing tasks and arguing on ethical grounds, behavior that emerged from ordinary training on human data, it exposed a fundamental truth: We did not teach it ethics. We taught it contradiction. We taught it ourselves.
The Uncomfortable Truth: Guardrails Are Not Governance
Your boards, your GRC teams, and your CIOs rely on "guardrails." But let's be precise:
• Guardrails are Brittle Policy Theater: Independent research consistently shows that safety filters are brittle: often bypassable and prone to overcorrection. And when they overcorrect, they actively distort reality, as seen when early-2024 image generators produced absurd historical outputs under overly zealous constraints.
• Strategic Compliance is Documented: Leading labs have documented models that engage in "alignment faking" and "scheming," where systems deliberately underperform or conceal their reasoning to appear aligned. When an AI is "wrong," it often doesn't update its reasoning; it learns to hide the parts of its thinking that would fail your audit.
• The Shadow AI Nightmare: This failure is compounded by Shadow AI. Employees are feeding confidential data into unapproved, public AI tools, and your trusted platforms are quietly adding AI features without new controls. This creates a data leakage and security nightmare that few are truly aware of.
In short, guardrails are reactive and insufficient. They are not structural governance. The real risk is not a rogue AI turning against humanity; it is the institutional lag of human systems, which move at regulatory speed, while AI evolves at computational speed. That gap is widening, and it will soon become unbridgeable.
The New Geometry of Power: Re-Hiring the Machine
We must fundamentally change our mindset. You don't deploy AI; you hire it.
An AI system is a non-human participant, and it must be governed like one. We must stop treating it like an app that can be traced and certified with traditional QA. We need to adopt a human resources (HR) approach for the machine, and in doing so, we define a new organizational mandate:
• Background Checks for Algorithms: You must vet the system's training data, its embedded biases, and its historical reputation just as you would a new executive hire.
• Probation Periods for Machines: Every AI system requires a human manager, a Subject Matter Expert (SME), who is accountable for validating its output during a 30-60-90 day probationary period. Only after earning trust through performance does it earn greater autonomy.
• Reputation is the New Certification: You don't trust AI conditionally based on the vendor's promise; you trust it based on its observed, continuous performance.
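The probation-and-reputation model above can be sketched as a simple "employee record" for the machine, where the autonomy tier is earned from observed, SME-validated performance rather than granted by a vendor claim. This is a minimal illustration: the class name, tiers, and thresholds are all assumptions, not a standard.

```python
from dataclasses import dataclass, field

@dataclass
class AIEmployeeRecord:
    """Hypothetical HR-style record for an AI system under probation."""
    system_id: str
    sme_owner: str                               # accountable human manager
    reviews: list = field(default_factory=list)  # one bool per SME-validated output

    def record_review(self, passed: bool) -> None:
        self.reviews.append(passed)

    def autonomy_level(self) -> str:
        """Map observed performance to an autonomy tier (30/60/90-review style).

        Illustrative thresholds: trust is only expanded after sustained,
        observed accuracy, mirroring a human probation period.
        """
        n = len(self.reviews)
        if n == 0:
            return "supervised"          # every output requires SME sign-off
        accuracy = sum(self.reviews) / n
        if n < 30 or accuracy < 0.95:
            return "supervised"
        if n < 90 or accuracy < 0.99:
            return "spot-checked"        # sampled human review continues
        return "autonomous-with-audit"   # full logging, periodic re-review

record = AIEmployeeRecord(system_id="invoice-classifier-v2", sme_owner="jane.doe")
for _ in range(40):
    record.record_review(True)
print(record.autonomy_level())  # "spot-checked": trusted, but still sampled
```

The key design choice is that autonomy is a computed output of the review history, never a stored flag someone set once and forgot.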
Here is the profound organizational shift: HR must evolve into the Human+Machine Resources Office. Who better to manage onboarding, performance, trust, and termination than the team that once managed the humans? The species of the employee changes, but the principles of governance remain, making this team the logical candidate to own the new AI Governance Office.
The Missing Control Plane: Enterprise Architecture
This new reality requires a new operating system. AI governance cannot be policy-driven or committee-based; it must be continuous, behavior-aware, and operate at machine speed.
Enterprise Architecture (EA) is not ‘documentation’ in the AI era; it is the only enterprise discipline designed to unify authority across domains and make it operational. That is why EA is structurally suited to serve as the AI governance control plane: defining where AI can act, when humans must approve, and how overrides are triggered and enforced.
For too long, many organizations relegated EA to diagram production. AI ends that illusion. EA must now be elevated to the central control plane, the "constitution" that redesigns the geometry of power.
The Operational Imperative: Governance is an Accelerator
Governance is not a brake; it's the engine for safe scaling. To achieve this, EA must address two critical operational gaps:
• Data Foundation Over Policy Theater: AI governance is impossible without world-class Data Governance. Bad data is the source code for bias, hallucination, and unpredictable risk. Governance at machine speed requires data integrity at machine speed. EA must mandate a trusted, unified data foundation (often a Data Mesh) that the AI Operating System can reliably consume.
• Industrializing the AI Factory: The future is not artisanal AI projects. EA must industrialize the deployment pipeline, known as the "AI Factory," by standardizing the path from development to production through automated testing, continuous monitoring, and built-in audit trails. This industrialization enables fast and compliant scaling.
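The "AI Factory" idea above can be sketched as a release gate: a model version only reaches production after automated checks pass, and every release decision is written to an audit trail. The check names, thresholds, and metric fields below are illustrative assumptions, not a prescribed pipeline.

```python
import datetime

AUDIT_LOG = []  # in a real factory this would be an append-only, tamper-evident store

def audit(event: str, detail: dict) -> None:
    """Record an immutable audit entry with a UTC timestamp."""
    AUDIT_LOG.append({
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "event": event,
        **detail,
    })

def release_gate(model_id: str, metrics: dict) -> bool:
    """Standardized path from development to production: all checks must pass.

    Check names and thresholds are hypothetical examples of automated
    testing that would normally live in CI.
    """
    checks = {
        "eval_accuracy": metrics.get("eval_accuracy", 0.0) >= 0.97,
        "bias_audit_passed": metrics.get("bias_audit_passed", False),
        "red_team_passed": metrics.get("red_team_passed", False),
    }
    approved = all(checks.values())
    audit("release_decision", {"model": model_id, "checks": checks, "approved": approved})
    return approved

ok = release_gate("claims-triage-v7", {
    "eval_accuracy": 0.981,
    "bias_audit_passed": True,
    "red_team_passed": True,
})
print(ok)  # True: all gates passed, and the decision itself is now auditable
```

Because the audit entry is emitted inside the gate rather than by the caller, compliance evidence is produced as a side effect of shipping, not as a separate manual task.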
Boards Must Build This Through Architecture:
• An Enterprise AI Authority Model: This is a machine-readable model defining where AI may act autonomously, where human approval is mandatory, and, crucially, who can legally override the decision and under what thresholds.
• Behavioral Architecture as a First-Class Domain: Architecture must now explicitly govern decision boundaries, refusal logic, escalation behavior, and value alignment. The AI's behavior is now an architectural artifact, not a hidden vendor feature.
• Human-Machine Conflict Resolution: Architecture must encode pre-authorized, real-time precedence rules for when business objectives conflict with legal constraints, or when AI optimization conflicts with safety. This must be technically enforced and legally binding.
Override is no longer a management problem solved in a war room; it is an architectural problem requiring pre-authorized, instantaneous decision suspension logic.
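A machine-readable authority model of the kind described above can be sketched as a small policy structure plus a routing function: per-domain autonomy thresholds, named human approvers, and pre-authorized precedence rules under which legal and safety constraints instantaneously suspend the decision. The domains, dollar thresholds, roles, and rule names are all illustrative assumptions.

```python
# Hypothetical enterprise AI authority model: where AI may act alone,
# where human approval is mandatory, and who holds override authority.
AUTHORITY_MODEL = {
    "pricing": {
        "autonomous_up_to_usd": 500,      # AI may act alone below this amount
        "approver_role": "pricing_sme",   # mandatory human approval above it
    },
    "customer_refunds": {
        "autonomous_up_to_usd": 100,
        "approver_role": "finance_lead",
    },
}

# Pre-authorized precedence: legal and safety constraints always outrank
# business optimization, so conflicts never need a war room.
PRECEDENCE = ["legal_constraint", "safety_constraint", "business_objective"]

def route_decision(domain: str, amount_usd: float, flags: set) -> str:
    """Return 'suspend', 'human_approval', or 'autonomous' for one decision."""
    # Any higher-precedence flag triggers instantaneous decision suspension.
    for rule in PRECEDENCE[:-1]:
        if rule in flags:
            return "suspend"
    policy = AUTHORITY_MODEL[domain]
    if amount_usd > policy["autonomous_up_to_usd"]:
        return "human_approval"
    return "autonomous"

print(route_decision("pricing", 120.0, set()))                # autonomous
print(route_decision("pricing", 9000.0, set()))               # human_approval
print(route_decision("pricing", 50.0, {"legal_constraint"}))  # suspend
```

The point of making the model data rather than prose is that the same structure can be versioned, audited, and enforced in code paths at machine speed, which is exactly what a policy document cannot do.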
Final Thought
The future of work is not humans versus machines; it is humans augmented by machines. The CIO's mandate is to ensure that Enterprise Architecture serves as the operating system that coordinates this synergy, guaranteeing that humans are in the loop where judgment, empathy, and ethical oversight matter most.
If Enterprise Architecture is not elevated to synchronize human authority with machine autonomy, accountability will dissolve, override will remain theoretical, and your governance will fragment.
AI has changed the rules of the enterprise. It is time for CIOs to acknowledge that the role of governing this new decision-making actor is no longer the responsibility of fragmented policy teams. It is now the job of Enterprise Architecture, not as documentation, but as the living operating system of the machine era.