At the beginning of 2025, I wrote about what I was looking forward to in generative AI: agents that act, systems that collaborate, automation that adapts, and AI moving closer to execution rather than just conversation.
Looking back now, 2025 was undeniably active – marked by frequent announcements, ambitious claims, and visible progress across models, tools, and platforms. What stayed with me most, however, was not just what advanced, but what became clearer once these ideas met real systems, real constraints, and real accountability.
This is a reflection on those learnings and how they shape what I’m looking toward in 2026.
What 2025 actually drew my attention to
Much of my thinking this year shifted away from raw model capability and toward system behavior in practice.
I found myself repeatedly asking:
- Where do agentic systems struggle once they leave demos?
- How do probabilistic systems coexist with deterministic business logic?
- What makes agent-driven automation fragile, and what makes it reliable?
- Why do identity, agentic user interfaces, standards, and contracts suddenly matter more than prompts?
Instead of “Can AI do this?”, the more useful question became:
“Under what conditions should AI be allowed to do this?”
1. Hallucination remains a real constraint in high-stakes systems
Despite steady improvements in reasoning and context handling, hallucination remains part of model behavior – often in subtler, harder-to-detect forms.
In domains like finance, tax, and compliance, where precision and traceability matter, AI works best as an assistive or decision-support layer paired with deterministic validation. Reliability, in practice, still emerges from systems, not models alone.
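As a rough sketch of what that layering can look like in practice – the fields, rules, and currency list below are illustrative, not taken from any specific system:

```python
from dataclasses import dataclass

@dataclass
class InvoiceExtraction:
    """Fields an LLM claims to have read from an invoice (illustrative)."""
    subtotal: float
    tax: float
    total: float
    currency: str

def validate(extraction: InvoiceExtraction) -> list[str]:
    """Deterministic checks that gate the probabilistic output.

    The model proposes values; these rules decide whether the result
    can flow downstream or must be routed to a human reviewer.
    """
    errors = []
    if extraction.currency not in {"USD", "EUR", "INR"}:
        errors.append(f"unsupported currency: {extraction.currency}")
    if extraction.subtotal < 0 or extraction.tax < 0:
        errors.append("negative amounts are not allowed")
    # Totals must reconcile exactly (to the cent), not approximately.
    if round(extraction.subtotal + extraction.tax, 2) != round(extraction.total, 2):
        errors.append("subtotal + tax does not equal total")
    return errors

candidate = InvoiceExtraction(subtotal=100.0, tax=18.0, total=118.0, currency="INR")
problems = validate(candidate)
print("route to human review" if problems else "safe to post")
```

The model never gets the last word: the deterministic layer does, and every rejection leaves a trace that can be audited.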
2. “Agent” became an overloaded term – and agent-washing followed
One pattern that stood out in 2025 was how broadly the word “agent” started getting applied.
In many conversations, “agent” referred to:
- Scripted automation
- Workflow orchestration
- RPA extensions
- Chatbots
A distinction that feels important to restate clearly:
An agent is an actor with agency – the ability to perceive its environment, pursue goals, reason, plan, and adapt.
Without AI-driven reasoning, planning, and adaptation, it’s typically automation, orchestration, or a non-agentic workflow – not a true AI agent.
Blurring this line inflated expectations and created architectural ambiguity. It also made it harder to decide when an agentic approach was genuinely needed versus when a well-designed workflow would have been more appropriate.
Agents are powerful, but they are selective tools, not default building blocks.
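To make that distinction concrete, here is a deliberately simplified sketch – the function names and the `plan` callable are placeholders standing in for an LLM, not any particular framework’s API:

```python
from dataclasses import dataclass, field
from typing import Callable

# A non-agentic workflow: the control flow is fixed at design time.
def workflow(document: str) -> str:
    cleaned = document.strip()             # step 1, always
    fields = cleaned.split(",")            # step 2, always
    return f"posted {len(fields)} fields"  # step 3, always

@dataclass
class Action:
    name: str
    args: dict = field(default_factory=dict)

# An agent: a planner perceives what has happened so far, decides the
# next action, and adapts until the goal is met or the budget runs out.
def agent(goal: str,
          tools: dict[str, Callable[..., str]],
          plan: Callable[[str, list[str]], Action],
          max_steps: int = 10) -> str:
    observations: list[str] = []
    for _ in range(max_steps):
        action = plan(goal, observations)  # reasoning/planning lives here
        if action.name == "done":
            return observations[-1] if observations else ""
        observations.append(tools[action.name](**action.args))
    raise RuntimeError("step budget exhausted without reaching the goal")
```

A workflow’s control flow is fixed when it is written; an agent’s control flow emerges at runtime from reasoning over observations. That difference is exactly what makes agents both powerful and harder to govern.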
3. Adding agents without re-thinking processes often increases fragility
Throughout the year, multiple industry voices – from academic research to analyst insights – reinforced a consistent message:
Agentic systems require process transformation, not just technical integration.
In many early implementations, speed of adoption took precedence over depth of redesign. While understandable given the pace of innovation, this often resulted in duplicated logic, unclear ownership, harder audits, and limited long-term ROI.
The lesson wasn’t that agents failed – but that process readiness matters as much as model capability.
4. Toward a human–agent hybrid workforce
Looking ahead, one idea I find increasingly compelling is the notion of a human–agent hybrid workforce. A future where humans and agents work in harmony, each playing to their strengths.
For that to work, agents cannot remain invisible background programs, acting only on a human user’s or a system’s behalf. They will need their own identities, scoped authority, traceable actions, and auditable outcomes.
Encouragingly, the industry has already begun thinking in this direction through workload identity, non-human identity, and zero-trust models (Microsoft Entra ID for agents is a good example). Adoption is still early, but this is an area I’m actively watching and optimistic about as systems mature.
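To illustrate the direction – the shape below is hypothetical and intentionally minimal; real systems would anchor identity in a directory or workload-identity provider such as Entra ID rather than in application code:

```python
import uuid
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class AgentIdentity:
    """A non-human identity with its own credentials and scoped authority.

    Hypothetical shape for illustration only; the key ideas are an
    accountable owner and an explicit, reviewable set of scopes.
    """
    agent_id: str
    owner: str              # accountable human or team
    scopes: frozenset[str]  # e.g. {"ledger:read", "ledger:propose"}

@dataclass
class AuditLog:
    entries: list[dict] = field(default_factory=list)

    def record(self, identity: AgentIdentity, action: str, allowed: bool) -> None:
        self.entries.append({
            "when": datetime.now(timezone.utc).isoformat(),
            "agent": identity.agent_id,
            "owner": identity.owner,
            "action": action,
            "allowed": allowed,
        })

def authorize(identity: AgentIdentity, action: str, log: AuditLog) -> bool:
    """Deny by default: the agent acts only within its declared scopes."""
    allowed = action in identity.scopes
    log.record(identity, action, allowed)
    return allowed

agent = AgentIdentity(agent_id=f"agent-{uuid.uuid4()}",
                      owner="finance-platform-team",
                      scopes=frozenset({"ledger:read", "ledger:propose"}))
log = AuditLog()
assert authorize(agent, "ledger:read", log)      # within scope
assert not authorize(agent, "ledger:post", log)  # posting stays with humans
```

Even in this toy form, the properties that matter are visible: every action is attributable to an agent, every agent to an owner, and every decision is logged.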
5. Markup as a quiet control plane for agents
One subtle but important shift I noticed in 2025 is how markup is becoming a first-class interface for agent behavior.
Patterns such as AGENTS.md, copilot-instructions.md, and structured agent skill (SKILL.md) definitions are moving intent out of ad-hoc prompts and into explicit, version-controlled artifacts.
This mirrors how software matured:
- APIs through OpenAPI
- Infrastructure through IaC
- And now agents through instruction-as-artifact
It also changes who participates: developers design schemas and constraints, while domain experts increasingly author intent in a form agents can reliably parse.
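As a hypothetical illustration of what such an artifact might contain – the file below is invented, but follows the spirit of the AGENTS.md convention:

```markdown
# AGENTS.md (illustrative example)

## Scope
Applies to the `billing-service` repository only.

## Constraints
- Never modify files under `migrations/`.
- All currency math must use the shared `Money` type, not floats.

## Review expectations
- Run `make test` before proposing changes.
- Flag any change touching tax logic for human review.
```

Once intent lives in a file like this, it can be reviewed, diffed, and versioned like any other engineering artifact – which is precisely the point.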
6. Code execution, discovery, and disciplined context
Another design shift that resonated with me was the direction around code execution with MCP, including recent work highlighted by Anthropic.
What stood out wasn’t execution alone – but tool discovery.
Instead of flooding agents with every possible tool (and exhausting context windows), agents can discover the right capability, load only the relevant contract, and execute in a constrained environment. This treats context as a scarce resource and aligns agent design with well-understood distributed-systems principles.
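A rough sketch of that pattern, with an in-memory registry and invented tool names standing in for a real MCP server:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ToolContract:
    name: str
    description: str
    schema: str  # the full input/output contract, loaded only on demand

# In a real MCP-style setup this registry would live behind a server;
# here it is an in-memory stand-in for illustration.
REGISTRY = [
    ToolContract("fx_rates", "look up foreign exchange rates", "<large schema>"),
    ToolContract("invoice_ocr", "extract fields from invoice PDFs", "<large schema>"),
    ToolContract("ledger_query", "query the general ledger", "<large schema>"),
]

def discover(query: str, limit: int = 1) -> list[ToolContract]:
    """Cheap keyword match as a stand-in for semantic search over tools."""
    terms = query.lower().split()
    scored = [(sum(t in tool.description.lower() for t in terms), tool)
              for tool in REGISTRY]
    ranked = sorted(scored, key=lambda pair: pair[0], reverse=True)
    return [tool for score, tool in ranked[:limit] if score > 0]

# Only the matched contract enters the agent's context window,
# instead of every tool schema all of the time.
relevant = discover("extract fields from an invoice")
print([tool.name for tool in relevant])  # -> ['invoice_ocr']
```

The economics are simple: discovery is cheap, context is expensive, so the agent pays the discovery cost first and loads contracts last.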
Why 2026 feels like a transition year
What makes 2026 feel different is not just incremental model progress, but the convergence of standards, interaction patterns, and user-experience thinking around agents.
Alongside communication protocols and execution standards, there is a clear signal that text-only agent interactions are reaching their limits.
Frameworks and patterns such as MCP-UI, AG-UI, Google’s A2UI, intent-driven application surfaces (for example, emerging ideas like OpenAI’s Apps SDK), and MCP Apps all point to the same realization:
Agents need to show, not just tell.
Structured, interactive responses rather than long text make agent behavior more intuitive, inspectable, and trustworthy. This shift breaks the “text wall” that has dominated agent experiences so far and begins to reshape not just UI, but application architecture, microservices boundaries, and orchestration patterns underneath.
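As a simple illustration – the payload shape below is invented, loosely in the spirit of these emerging protocols rather than any one of their actual schemas:

```python
import json

# Instead of a wall of text, the agent emits a structured, inspectable
# response that a host application can render as an interactive surface.
agent_response = {
    "summary": "3 invoices need attention before month-end close.",
    "component": {
        "type": "table",
        "columns": ["invoice", "issue", "suggested_action"],
        "rows": [
            ["INV-1042", "total does not reconcile", "route to reviewer"],
            ["INV-1043", "unsupported currency", "request reissue"],
            ["INV-1051", "missing tax breakdown", "ask supplier"],
        ],
    },
    "actions": [
        {"id": "route_all", "label": "Route all to review",
         "requires_confirmation": True},
    ],
}

print(json.dumps(agent_response, indent=2))
```

Notice what the structure buys you: the host can render it, a reviewer can inspect it, and the confirmation-gated action keeps the human in the loop by construction.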
Signals I’m paying attention to in 2026
As we head into 2026, a few themes stand out to me:
- Understanding how agentic standards and architectural patterns are evolving, and exploring what a comparable set of guiding principles for decision-makers might look like – one that helps them reason more clearly about when agents add value and when simpler approaches are more effective
- Reimagining agent–user interaction beyond text through agentic UI and intent-based interfaces
- Deep-diving into how these interaction models reshape application architecture, cloud engineering, and the overall enterprise architecture landscape
- Sharing awareness by synthesizing industry research, emerging patterns, and real-world signals into clearer mental models for decision-makers, so agentic use cases are grounded in value, risk, and ROI rather than momentum
Not to slow adoption, but to improve clarity and decision quality as organizations go live with agentic systems for the first time.
I also want to watch closely whether standard patterns or frameworks emerge for making business processes genuinely compatible with agentic systems, rather than simply adding agents on top of legacy workflows.
This is less about technology and more about transformation – and it’s where the real leverage lies.
What this leaves me with
2025 showed that agentic AI is not a shortcut – it’s a shift in how we design systems.
The next phase won’t be defined by how autonomous agents become, but by how thoughtfully we integrate them into processes, interfaces, and organizations.
That’s the lens I’m carrying into 2026.
Disclaimer: The views expressed in this article are my own and may not reflect those of my employer; they are informed by industry observation, technology trends, and research.