Over the past few years, I have worked across a number of Azure environments that were originally built during the rapid migration period between 2020 and 2023. At that time, most organizations were focused on speed. Infrastructure needed to move quickly, remote access had to be stable, and workloads had to be accessible from anywhere.
In many cases, lift-and-shift was the chosen approach. Virtual machines were migrated with minimal redesign. Autoscale was enabled in some environments, but often without detailed modelling around user concurrency or login behavior. Cost optimization and governance were usually planned as follow-up activities once the environment was live.
At the time, that approach made sense. Availability and continuity were the priority.
Now in 2026, I am seeing a consistent pattern. Organizations are revisiting their Azure environments and reassessing decisions that were made during the initial migration phase. This is not because Azure has failed, but because the original designs were created under pressure.
One of the most common areas being reviewed is session host density in Azure Virtual Desktop and Citrix environments running in Azure.
In several environments I have assessed:
- Density targets were technically correct based on vendor guidance
- Hosts were routinely running at maximum intended capacity during business hours
- Buffer capacity was not maintained in practice
When additional users attempted to log on during peak periods, new hosts would begin powering on. There was often a delay before those machines were available to accept sessions. During that window, login times increased, and performance complaints began to surface.
From a monitoring perspective, nothing appeared “broken”, but the environment was effectively operating at its upper limit with no operational headroom.
This is where autoscale is sometimes misunderstood. Autoscale reacts to thresholds; it does not predict demand. If an environment is allowed to run consistently at maximum density, it is operating without resilience. The experience remains acceptable until demand exceeds expectation, at which point users feel the impact before additional capacity becomes available.
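The lag described above is easy to see in a toy model. The following is a minimal sketch, not AVD's actual scaling logic: host capacity, the trigger threshold, and the provisioning delay are all assumed placeholder values. It shows how a purely reactive trigger only fires once utilization is already high, so a login surge queues users while new hosts are still provisioning.

```python
# Minimal simulation of threshold-based autoscale (illustrative only;
# names and numbers are assumptions, not AVD's real scaling behaviour).

HOST_CAPACITY = 16          # assumed max sessions per host (density target)
SCALE_THRESHOLD = 0.90      # trigger fires above 90% pool utilization
PROVISION_DELAY = 3         # ticks before a newly started host accepts sessions

def simulate(login_curve):
    """Return, per tick, how many logins could not be placed immediately."""
    hosts_ready = 2                 # hosts already powered on
    pending = []                    # ticks remaining until each new host is ready
    sessions = 0
    queued_per_tick = []
    for logins in login_curve:
        # finish provisioning any hosts whose delay has elapsed
        pending = [t - 1 for t in pending]
        hosts_ready += sum(1 for t in pending if t == 0)
        pending = [t for t in pending if t > 0]

        capacity = hosts_ready * HOST_CAPACITY
        placed = min(logins, capacity - sessions)
        sessions += placed
        queued_per_tick.append(logins - placed)   # users left waiting

        # reactive trigger: only fires once utilization is already high
        if capacity and sessions / capacity >= SCALE_THRESHOLD and not pending:
            pending.append(PROVISION_DELAY)
    return queued_per_tick

# A morning login surge that exceeds two hosts' worth of capacity:
print(simulate([5, 10, 10, 10, 10, 5]))   # → [0, 0, 0, 3, 10, 5]
```

The trigger does fire, but only at the tick where the pool is already full, so the queued logins in the last three ticks land before the new host is ready. That gap is the "users feel the impact first" window.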
Another pattern I am seeing relates to cost optimization exercises. In an effort to reduce Azure spend, environments are resized, user density is increased, or buffer capacity is reduced. While these actions can lower monthly costs, they can also introduce instability if concurrency patterns are not reviewed at the same time.
A more effective approach has been to model cost against concurrent user capacity rather than focusing purely on virtual machine pricing. When cost is evaluated in terms of productive user sessions, decisions around density and buffer capacity become more balanced. In most cases, maintaining a small amount of additional headroom provides more stability than pushing infrastructure to theoretical limits.
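The modelling itself is straightforward. The sketch below uses entirely assumed prices and capacities (not real Azure rates) to show the point: when buffer hosts are priced in but excluded from productive capacity, the cost of headroom becomes an explicit, comparable number rather than an invisible overhead.

```python
# Illustrative cost-per-concurrent-session comparison. All prices and
# capacities are assumed placeholder values, not real Azure rates.

def cost_per_session(vm_monthly_cost, hosts, sessions_per_host, buffer_hosts):
    """Monthly cost divided by the concurrent sessions actually served.

    Buffer hosts are powered on for headroom but not counted as
    productive capacity, so headroom shows up honestly in the unit cost.
    """
    total_cost = vm_monthly_cost * (hosts + buffer_hosts)
    productive_sessions = hosts * sessions_per_host
    return total_cost / productive_sessions

# Option A: run at theoretical maximum density, no buffer
dense = cost_per_session(vm_monthly_cost=400, hosts=10,
                         sessions_per_host=16, buffer_hosts=0)

# Option B: slightly lower density plus one buffer host
buffered = cost_per_session(vm_monthly_cost=400, hosts=10,
                            sessions_per_host=14, buffer_hosts=1)

print(f"max density: ${dense:.2f}/session, with headroom: ${buffered:.2f}/session")
```

Framed this way, the conversation shifts from "can we cut a host" to "is the per-session premium for headroom worth the login-time stability", which is usually a much easier decision to make with the business.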
Hybrid architecture is also being reconsidered. Early cloud discussions often assumed full migration as the objective. In reality, some workloads benefit from a deliberate hybrid design.
Common considerations now include:
- Identity placement and dependency mapping
- Domain controller proximity
- Network latency impact on login performance
- Workloads better suited to fixed infrastructure
Retaining certain components on-premises is not necessarily a sign of incomplete migration. When done intentionally, it can simplify design and improve reliability.
FinOps practices have matured as well. Initially, cost reporting was often separate from operational management. Increasingly, organizations are embedding cost reviews into regular operational cadence.
This typically includes:
- Scheduled right-sizing reviews
- Storage lifecycle enforcement
- Savings plan alignment to real concurrency
- SKU rationalisation based on observed usage
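A scheduled right-sizing review can be as simple as flagging machines whose observed peaks sit well below their SKU's capacity. The sketch below is illustrative: the thresholds, field names, and VM names are assumptions, and in practice the observed peaks would come from your monitoring platform over a full review window, not a hand-built list.

```python
# Sketch of a scheduled right-sizing check. Thresholds and the
# observed-usage data structure are assumptions for illustration.

OVERSIZED_CPU = 0.40   # peak CPU below 40% across the review window
OVERSIZED_MEM = 0.50   # peak memory below 50%

def right_sizing_candidates(vms):
    """Return VM names whose observed peaks suggest a smaller SKU."""
    return [
        vm["name"]
        for vm in vms
        if vm["peak_cpu"] < OVERSIZED_CPU and vm["peak_mem"] < OVERSIZED_MEM
    ]

observed = [
    {"name": "avd-host-01", "peak_cpu": 0.85, "peak_mem": 0.70},
    {"name": "app-vm-02",   "peak_cpu": 0.22, "peak_mem": 0.35},
    {"name": "sql-vm-01",   "peak_cpu": 0.55, "peak_mem": 0.80},
]

print(right_sizing_candidates(observed))   # → ['app-vm-02']
```

The value is less in the logic than in the cadence: running a check like this on a schedule turns right-sizing from a one-off cost exercise into a standing operational habit.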
Security posture is another area of reassessment. In some early Azure deployments, identity roles and network configurations were inherited directly from on-premises models. As environments mature, those assignments are being reviewed and simplified. Clearer role boundaries and better segmentation are reducing risk while also making environments easier to manage.
Across all of these examples, the consistent theme is intent.
Environments that are operated deliberately, with density, cost, and security decisions grounded in current data, tend to remain stable and predictable.
In contrast, environments that continue to treat Azure as a direct extension of the data centre tend to experience greater volatility, both operationally and financially.
The shift I am seeing in 2026 is not about leaving Azure. It is about operating Azure more deliberately.
Questions that are becoming more relevant in design workshops include:
- What is our actual concurrent user pattern throughout the day?
- What density are we targeting versus what density are we actually running?
- How much buffer do we maintain during peak login periods?
- How long does autoscale take to make additional capacity available?
- Are we reviewing VM sizing on a regular cadence?
- Is our security model explicitly designed for Azure, or inherited from previous patterns?
These are not complex questions, but they do require accurate data and disciplined review.
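The first question above, for example, only needs session start and end times, data most environments already collect. The sketch below shows one way to turn those into a concurrency curve; the tuple format and the hour granularity are assumptions, and real session logs would need parsing into this shape first.

```python
# Deriving an actual concurrency curve from session start/end times.
# Input format (start_hour, end_hour) is an assumption for illustration;
# real login data would be parsed from your session logs.

from collections import Counter

def peak_concurrency(sessions):
    """Count concurrent sessions per hour; return the busiest (hour, count)."""
    load = Counter()
    for start, end in sessions:
        for hour in range(start, end):
            load[hour] += 1
    hour, peak = max(load.items(), key=lambda kv: kv[1])
    return hour, peak

# Five sessions across a working day (hours in 24h format):
sessions = [(8, 17), (9, 17), (9, 12), (13, 18), (8, 16)]
print(peak_concurrency(sessions))   # → (9, 4)
```

Comparing that measured peak against the density you think you are running is usually the fastest way to find out whether the buffer exists in practice or only on the design diagram.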
Azure provides flexibility, but flexibility without modelling can introduce unpredictability. The organizations that are achieving stability in 2026 are those that are taking the time to reassess architecture decisions made under pressure several years ago.
Cloud adoption was accelerated out of necessity. Cloud maturity is now a deliberate exercise.
For many organizations, this year is less about expanding Azure and more about refining it.
If you would like to review your Azure environment and move towards a more intentional, resilient design, contact us to start the conversation.