Data Centers That “Think for Themselves” Won’t Happen By Magic. They’ll Happen By Governance.

AI is everywhere right now, from customer support to marketing, sales, and analytics. The hype cycle is loud, the budgets are real, and the demos are getting slicker by the week. But in IT operations, the question isn’t “Can AI help?” It’s “Can AI help without turning your environment into a higher-speed version of the same chaos?”

That tension is at the heart of Platform9 Co-founder Madhura Maskasky’s recent Forbes Technology Council piece, “The Future Of IT: Data Centers That Think For Themselves.”

The most important thing about her argument is what it doesn’t do. It doesn’t pretend that AI will “run your data center” in the near term. Instead, it points at a more plausible future: a control plane that becomes increasingly conversational, increasingly connected across tools, and eventually capable of permissioned action – as long as trust, verification, and auditability become first-class citizens.

IT admins and business leaders alike have seen this movie before. A new layer gets added to the stack, the promise is simplicity, and the lived reality is another set of dashboards and workflows that someone has to own.

AI can be different. But it won’t be different by accident.

The Most Immediate Shift Isn’t “Automation.” It’s Interface

The piece calls out something a lot of teams feel but rarely name: the real pain isn’t a lack of tools – it’s the cost of assembling context. Operators don’t fail because they can’t click the right button. They fail because finding “what’s wrong” means hunting across telemetry, tickets, change logs, and tribal knowledge under pressure.

That’s why the “chat-based control plane” matters. Chat isn’t a magic switch you flip. But it changes the starting point. Instead of searching for symptoms (“where is the alert?” “which dashboard?”), you start with intent (“what changed?” “what’s the blast radius?” “what’s the safest rollback?”). The assistant’s job becomes compressing the context-gathering phase so humans can spend their time on judgment.

For organizational leaders, this is the difference between spending 40 minutes correlating signals and spending 10 minutes validating a well-structured hypothesis.
For admins, it’s the difference between “we have tools” and “we have outcomes.”
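
To make that shift concrete, here is a minimal sketch of what intent-first context gathering could look like behind a chat surface. It is a hypothetical, not a Platform9 interface: the fetch_* stubs stand in for whatever monitoring, change-management, and ticketing APIs a real assistant would actually query.

```python
from dataclasses import dataclass, field

# Hypothetical sketch. The fetch_* stubs stand in for real monitoring,
# change-management, and ticketing integrations.

def fetch_alerts(service: str) -> list[dict]:
    return [{"severity": "warning", "msg": f"{service}: p99 latency up"}]

def fetch_changes(service: str) -> list[dict]:
    return [{"kind": "config", "msg": f"{service}: replicas 3 -> 5"}]

def fetch_tickets(service: str) -> list[dict]:
    return []  # placeholder: no open tickets in this sketch

@dataclass
class ContextBundle:
    """One structured answer to an operator's intent, instead of N dashboards."""
    intent: str
    alerts: list[dict] = field(default_factory=list)
    changes: list[dict] = field(default_factory=list)
    tickets: list[dict] = field(default_factory=list)

    def summary(self) -> str:
        return (f"{len(self.alerts)} alerts, {len(self.changes)} recent changes, "
                f"{len(self.tickets)} tickets for: {self.intent!r}")

def gather_context(intent: str, service: str) -> ContextBundle:
    # Fan out to every source the operator would otherwise search by hand,
    # then hand back one bundle for human judgment.
    return ContextBundle(
        intent=intent,
        alerts=fetch_alerts(service),
        changes=fetch_changes(service),
        tickets=fetch_tickets(service),
    )

print(gather_context("what changed?", "checkout-api").summary())
# -> 1 alerts, 1 recent changes, 0 tickets for: 'what changed?'
```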

But there’s a catch that the article rightly calls out: trust and verification. If the assistant can’t show its work, it becomes one more opinionated layer that operators learn to ignore.

“One Brain” Will Either Be The Breakthrough—Or The Next Integration Mess

The assistant won’t operate alone. It will plug into partner tools via APIs and act as a unifying surface for troubleshooting and operations workflows. This is the obvious direction of travel.

It’s also where things can go sideways.

We all know what happens when integrations scale without governance. Connectors multiply, ownership gets fuzzy, security reviews get skipped, and the environment becomes harder to reason about, not easier. The Forbes article lands the right punchline: the hard part isn’t wiring integrations. It’s making correlation reliable.

This is where the industry has to grow up a little.

If AI becomes the “one brain” for operations, then integrations can’t be treated like quick wins. They have to be treated like products:

  • versioned and tested
  • owned with clear accountability
  • reviewed for security and least privilege
  • monitored for quality drift

Otherwise, the “assistant” just becomes a very expensive amplifier of noisy telemetry and brittle configuration.
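
What could “integrations as products” mean in practice? Here is one minimal sketch, with hypothetical names throughout: every connector carries a version, an accountable owner, a least-privilege scope, and a freshness bound that makes quality drift detectable.

```python
from dataclasses import dataclass

# Hypothetical connector registry enforcing the list above. Field names
# and checks are illustrative, not a real product schema.

@dataclass(frozen=True)
class Connector:
    name: str              # e.g. "prometheus-metrics"
    version: str           # versioned and tested: pin what the assistant talks to
    owner: str             # clear accountability: a team, not "whoever wired it"
    scopes: tuple          # least privilege: what the connector may read or do
    max_staleness_s: int   # quality drift: how stale its data is allowed to get

REGISTRY: dict[str, Connector] = {}

def register(conn: Connector) -> None:
    # Refuse connectors that skip the basics, rather than retrofitting later.
    if not conn.owner:
        raise ValueError(f"{conn.name}: no accountable owner")
    if "admin" in conn.scopes:
        raise ValueError(f"{conn.name}: blanket admin scope violates least privilege")
    REGISTRY[conn.name] = conn

register(Connector(
    name="metrics",
    version="1.4.2",
    owner="sre-observability",
    scopes=("read:timeseries",),
    max_staleness_s=60,
))
```

The design choice that matters is the refusal at registration time: a connector with no owner or a blanket scope never enters the “one brain” in the first place.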

The Future Isn’t Autonomous. It’s Permissioned

If the first wave is conversational insight, the second wave is action. And action is where the ROI is hiding, because action is where time gets saved, handoffs get reduced, and mean time to resolution (MTTR) starts to drop.

But action is also where risk spikes.

The piece frames this as permissioned, closed-loop remediation. Assistants can propose and assemble workflows, but execution needs governance—especially in regulated environments. That’s the adult conversation the market needs. Not “AI will fix it,” but “AI will propose it, justify it, execute it within clear boundaries, and leave a tamper-evident record.” Admins care about rollback and repeatability. Leadership cares about auditability and accountability.

The operational standard is pretty straightforward: the assistant can be fast, but it can’t be unaccountable. If it can’t show which signals drove a recommendation, what confidence it has, and what controls governed the action, teams will either underuse it (no ROI) or over-trust it (new incident risk).
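
Here is a minimal sketch of that propose, justify, execute, record loop. The policy gate and the hash-chained audit entries are illustrative assumptions, not any product’s implementation:

```python
import hashlib
import json
import time

# Hypothetical sketch: every stage of a remediation lands in a
# hash-chained (tamper-evident) audit log, and execution is gated
# by scope and confidence.

AUDIT: list[dict] = []

def record(event: dict) -> None:
    # Chain each entry to the previous one so after-the-fact edits
    # break the chain and become detectable.
    prev = AUDIT[-1]["hash"] if AUDIT else "genesis"
    body = {"ts": time.time(), "prev": prev, **event}
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    AUDIT.append(body)

def remediate(action: str, evidence: list[str], confidence: float,
              allowed_actions: set[str], approver: str | None) -> bool:
    # Propose first, with the signals and confidence that drove it.
    record({"stage": "proposed", "action": action,
            "evidence": evidence, "confidence": confidence})
    if action not in allowed_actions:
        record({"stage": "rejected", "reason": "out of scope"})
        return False
    if confidence < 0.9 and approver is None:
        record({"stage": "held", "reason": "needs human approval"})
        return False
    record({"stage": "executed", "action": action, "approver": approver})
    return True

remediate("restart:checkout-api",
          evidence=["alert:p99-latency", "change:replica-count"],
          confidence=0.95,
          allowed_actions={"restart:checkout-api"},
          approver=None)
print([e["stage"] for e in AUDIT])  # ['proposed', 'executed']
```

Hash chaining is the cheap part. The real work is deciding which actions belong in allowed_actions and what confidence threshold your auditors will accept.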

The Real Thesis: AI Ops Is a Program, Not a Feature

The most valuable part of Madhura’s piece is the “Next Steps” framing: treat AI-assisted operations like a risk-managed engineering program, not a rollout.

That means:

  1. Govern first (what it can see, recommend, execute; who owns it; what evidence is required).
  2. Pilot narrowly (one workflow, clear metrics, strict scope).
  3. Standardize deliberately (traceability formats, connector requirements, rollback primitives, assurance benchmarks).

This isn’t the glamorous version of the story. But it’s the version that actually works.
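
To make step 1 concrete, here is one hypothetical way to express a “govern first” policy as data, so it can be versioned and reviewed like any other engineering artifact. The field names are our assumptions, not the article’s:

```python
from dataclasses import dataclass

# Hypothetical "govern first" policy, expressed as data rather than a
# wiki page, so it can be diffed, reviewed, and enforced.

@dataclass(frozen=True)
class AssistantPolicy:
    can_see: tuple           # data sources the assistant may read
    can_recommend: tuple     # action types it may propose
    can_execute: tuple       # action types it may run unattended
    owner: str               # who answers for the program
    evidence_required: bool  # must every recommendation cite its signals?

PILOT = AssistantPolicy(
    can_see=("metrics", "change-log"),
    can_recommend=("restart", "scale"),
    can_execute=(),          # pilot narrowly: propose-only to start
    owner="platform-engineering",
    evidence_required=True,
)
```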

If you want the full argument, including why “conversational control planes,” “one brain” integrations, and permissioned remediation are converging right now, read the original Forbes Technology Council post here:

https://www.forbes.com/councils/forbestechcouncil/2026/01/26/the-future-of-it-data-centers-that-think-for-themselves
