Modern Network Infrastructure: 3 Principles for Connected Data and AI-Ready Architecture
Description: In this series, we’re covering key insights into the four digital infrastructure foundations: Compute, Network, Data, and Security. This installment covers what it takes to build modern network infrastructure, exploring the next-generation networks that enable enterprises to outpace competitors with dependable, secure connections across the globe.
As IT leaders prepare their network infrastructure for existing and emerging challenges, one crucial task is managing the growing data and connectivity demands that artificial intelligence (AI) applications place on the global enterprise.
Craig Waldrop, Vice President, Business Development Connectivity at Digital Realty, discusses the network foundation.
Emerging statistics highlight the impact of AI on IT infrastructure demands, including:
- More than three-quarters (79%) of companies report network latency issues when managing AI workloads. (Cisco, Cisco AI Readiness Index, 2023)
- Global companies say most of their IT locations need to support AI/machine learning requirements. (Digital Realty, Global Data Insights Survey, 2022)
- AI’s data storage and computing needs will stretch IT budgets and existing resources.
Increasingly, the question for IT leaders becomes how to steward resources so the enterprise stays connected now and in the future.
This blog focuses on practical, accessible strategic thinking for data- and AI-ready architectures. These challenges highlight the need to migrate from legacy infrastructure to a modern one, designed to equip global companies with the network, security, compute, and data resources to compete and succeed.
In this blog post, we will also discuss:
- The pain points IT leaders face with transitioning to modern IT infrastructure
- Critical strategy for network infrastructure: local ingress and egress for colocation settings
- Enabling the local ingress/egress network infrastructure strategy
Let’s dive in.
Identify pain points stifling global enterprise growth
You may have run into two points of resistance in your transition from legacy to modern network infrastructure:
- Inflexibility of existing solutions. One example of this problem is the inability to dynamically connect between cloud and on-prem to evaluate the best location for strategic workloads.
- The high cost of expanding into new markets or growing in existing ones. In a centralized model, every additional point of presence, access point, or location where multiple networks or communication devices share a connection raises compute costs, because data must still be hauled back to a central location. That burden grows as the company expands geographically, penalizing the enterprise precisely as it scales globally.
From a broad standpoint, nearly every migration has these pain points.
What makes the move from legacy infrastructure to modern network infrastructure different is the speed and volume of demands AI workloads place on the migration process.
As enterprises race to fulfill internal and external expectations for AI-enabled solutions, the challenge of building a manageable network forces IT organizations to iterate and deploy almost simultaneously.
To help IT leaders cut through the noise of user and customer demands, we need to define the core challenge: connecting data, users, and applications to resources with reliable network performance.
In the world of data centers, we’ve seen this challenge show up most often as a network management problem.
Our clients often gain a sense of control using software-defined networking (SDN) and network performance monitoring tools, which enable them to proactively ensure reliable connectivity.
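As a simple illustration of what “proactive” monitoring can look like, here is a minimal sketch that times TCP handshakes against a latency budget and flags slow or unreachable endpoints. It is not any particular vendor’s tooling: the endpoints, ports, and threshold are hypothetical placeholders, and real SDN controllers and monitoring platforms expose far richer telemetry.

```python
# Minimal illustration of a proactive reachability/latency check.
# Endpoints and the latency budget are hypothetical placeholders.
import socket
import time

ENDPOINTS = {
    "us-east-pop": ("198.51.100.10", 443),
    "eu-west-pop": ("203.0.113.20", 443),
}
LATENCY_BUDGET_MS = 150


def tcp_connect_latency_ms(host: str, port: int, timeout: float = 2.0) -> float | None:
    """Time a TCP handshake as a rough proxy for path latency."""
    start = time.perf_counter()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return (time.perf_counter() - start) * 1000
    except OSError:
        return None  # unreachable within the timeout


def check_endpoints() -> None:
    for name, (host, port) in ENDPOINTS.items():
        latency = tcp_connect_latency_ms(host, port)
        if latency is None:
            print(f"[ALERT] {name}: unreachable")
        elif latency > LATENCY_BUDGET_MS:
            print(f"[WARN] {name}: {latency:.1f} ms exceeds {LATENCY_BUDGET_MS} ms budget")
        else:
            print(f"[OK] {name}: {latency:.1f} ms")


if __name__ == "__main__":
    check_endpoints()
```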
While reducing latency remains crucial, AI-driven requirements are shaping a new standard for network infrastructure:
- Localization of access to data
- Standardized network deployments
- Any-to-any interconnection
Next, we’ll look at the local ingress/egress strategy and why it makes the most sense for migration from legacy to modern network infrastructure for global enterprises.
Critical strategy: Create local points of network ingress and egress in colocation
Why choose a local ingress and egress strategy instead of a centralized network architecture? Centralized network control dramatically increases the cost of network monitoring. By contrast, local ingress and egress monitoring (or distributed monitoring) lets monitoring be tailored to needs at the point of data consumption.
This still allows for a roll-up of data to global, high-level network monitoring.
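As an illustration of that roll-up, here is a minimal sketch in which hypothetical per-site collectors summarize locally measured metrics and a global view aggregates them. The site names and figures are placeholders rather than a specific monitoring product’s output.

```python
# A minimal sketch of local monitoring with a global roll-up, assuming
# hypothetical per-site collectors that each summarize their own traffic.
from dataclasses import dataclass
from statistics import mean


@dataclass
class SiteReport:
    site: str                # colocation site handling local ingress/egress
    p95_latency_ms: float    # latency measured at the point of data consumption
    egress_gbps: float       # locally measured egress volume


def roll_up(reports: list[SiteReport]) -> dict:
    """Aggregate per-site summaries into a global, high-level view."""
    return {
        "sites_reporting": len(reports),
        "worst_p95_latency_ms": max(r.p95_latency_ms for r in reports),
        "avg_p95_latency_ms": round(mean(r.p95_latency_ms for r in reports), 1),
        "total_egress_gbps": sum(r.egress_gbps for r in reports),
    }


if __name__ == "__main__":
    local_reports = [
        SiteReport("fra-colo", p95_latency_ms=18.0, egress_gbps=4.2),
        SiteReport("sin-colo", p95_latency_ms=25.0, egress_gbps=2.8),
        SiteReport("dal-colo", p95_latency_ms=12.0, egress_gbps=6.1),
    ]
    print(roll_up(local_reports))
```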
Here’s an example:
Company A has a centralized network model:
- Users are distributed globally
- One central data center
- Applications and data sit in one location, away from users’ points of presence
In this model, security, network, and data live in one place, which simplifies analysis, review, and risk mitigation. The drawback is the impact on the global enterprise: with users distributed worldwide, traffic must travel long distances back to the central site, and network performance vulnerabilities form a unique, ever-changing landscape in each region.
Alternatively, Company B embraces the localized strategy, which:
- Leverages data centers closest to points of presence, reducing compute costs
- Enhances Company B’s workflows and user experience by locating network ingress and egress closer to users, applications, and ‘things’ (see the latency sketch after this list)
- Leverages the local expertise of experienced data center operators that can tailor solutions based on regional requirements, decreasing security risks
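The difference is easiest to see with rough numbers. Below is a minimal sketch comparing the two models with invented round-trip-time estimates: Company A routes every region to one central site, while Company B sends each region to its nearest local ingress point. The site names and latencies are hypothetical placeholders.

```python
# Illustrative comparison of Company A's centralized model with Company B's
# localized ingress/egress, using invented round-trip-time estimates.
CENTRAL_SITE = "chi-central"

# Rough RTT (ms) from each user region to each candidate site; values are made up.
RTT_MS = {
    "eu-users":   {"chi-central": 110, "fra-colo": 12,  "sin-colo": 160},
    "apac-users": {"chi-central": 180, "fra-colo": 155, "sin-colo": 15},
    "us-users":   {"chi-central": 20,  "fra-colo": 95,  "sin-colo": 190},
}

for region, rtts in RTT_MS.items():
    centralized = rtts[CENTRAL_SITE]
    local_site, localized = min(rtts.items(), key=lambda kv: kv[1])
    print(f"{region}: centralized {centralized} ms -> localized {localized} ms via {local_site}")
```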
Next, we’ll drill down into the enablement of modern network architecture, exploring what it takes to put the local ingress/egress strategy into action.
Enable four key tactics for modern network infrastructure
Enabling the localized network strategy requires a solid partner to tie all the moving pieces together. With that advantage, enterprises retain control and oversight of network monitoring without the compute-cost penalty that centralization imposes as the company grows.
We’ve found four areas of concentration:
- Combine and localize network traffic. This gives enterprises the flexibility to grow by avoiding the compute costs associated with centralized network topologies.
- Segment and tier traffic. Segmentation gives certain traffic priority, reducing network congestion and performance issues (see the queuing sketch after this list).
- Interconnect network, cloud, and service providers. Aligning with a partner that can execute network and cloud interconnection seamlessly can be valuable for IT leaders. Digital Realty offers this partnership through a variety of resources, including ServiceFabric™ Connect.
- Deploy, interconnect, and host SDN edge. In the high-stakes AI environment, an airtight approach to deployment, network interconnectivity, and SDN is critical. Localizing these three components supports seamless deployments backed by network reliability and security.
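To make the segmentation and tiering tactic concrete, here is a minimal Python sketch of strict-priority queuing, which is the effect QoS policies (for example, DSCP marking) achieve in a real network. The traffic classes and payloads are hypothetical, and production networks enforce tiering in switches and routers rather than in application code.

```python
# A minimal sketch of traffic tiering: items tagged with a class drain in
# priority order. Class names and payloads are hypothetical placeholders.
import heapq
from itertools import count

# Lower number = higher-priority tier.
TIERS = {"ai-training": 0, "interactive": 1, "bulk-replication": 2}


class TieredQueue:
    def __init__(self) -> None:
        self._heap: list = []
        self._seq = count()  # preserves FIFO order within a tier

    def enqueue(self, traffic_class: str, payload: str) -> None:
        heapq.heappush(self._heap, (TIERS[traffic_class], next(self._seq), payload))

    def dequeue(self) -> str:
        return heapq.heappop(self._heap)[2]


q = TieredQueue()
q.enqueue("bulk-replication", "nightly backup chunk")
q.enqueue("interactive", "user dashboard query")
q.enqueue("ai-training", "gradient sync burst")
print(q.dequeue())  # -> "gradient sync burst" (highest tier drains first)
```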
Modern network infrastructure enablement requires a partner well positioned for high-density deployments. Digital Realty’s PlatformDIGITAL® is an open, yet secure, platform that reduces network infrastructure complexity, provides customers with a secure data "meeting place," and offers a proven Pervasive Datacenter Architecture (PDx™) solution methodology for powering innovation and efficiently managing Data Gravity challenges.
Discover more in Digital Realty’s latest eBook, “Are You Data and AI Ready?”