Key Points
- Cisco and NVIDIA expand AI infrastructure partnership to cover data centres and edge locations
- Companies claim the new architecture reduces AI deployment timelines from months to weeks
- Framework supports NVIDIA RTX PRO 4500 Blackwell GPUs for edge computing workloads
Cisco and NVIDIA have expanded their infrastructure partnership to help organisations deploy artificial intelligence systems across data centres, cloud platforms and edge locations, the companies announced on Monday.
The expansion of what Cisco calls its Secure AI Factory with NVIDIA provides a framework for running AI workloads closer to where data is generated, rather than routing everything through centralised data centres. The approach, the companies claim, can compress deployment timelines from months to weeks while building security into the infrastructure from the outset.
For Indian enterprises operating across multiple locations, from manufacturing plants to retail outlets, the framework addresses a practical challenge. AI systems that analyse data in real time, whether for quality control on a factory floor or customer behaviour in a store, often cannot tolerate the delay of sending data to a distant data centre for processing.
Cisco-NVIDIA AI partnership
“Most organisations understand the potential for AI to transform their businesses, but they’re navigating how to deploy the technology safely and at scale,” said Cisco CEO Chuck Robbins. “In partnership with NVIDIA, we’re solving that challenge with an architecture that sets a new standard for performance.”
NVIDIA CEO Jensen Huang said security needed to be embedded at every layer of AI infrastructure. “Together, NVIDIA and Cisco are building the secure foundation for AI infrastructure, core to edge, so companies can scale intelligence with confidence,” Huang stated.
The partnership targets three categories of customers: enterprises building their own AI capabilities, cloud service providers, and telecommunications companies looking to offer AI services to business clients.
A significant portion of the announcement focuses on edge computing, which refers to processing data at or near its source rather than in a centralised data centre. This matters for applications where even milliseconds of delay can affect outcomes, such as autonomous systems, real-time video analysis or industrial automation.
Cisco said its server products, including the Cisco UCS and Unified Edge portfolios, will now support NVIDIA RTX PRO 4500 Blackwell Server Edition GPUs. A GPU, or graphics processing unit, is a specialised processor originally designed for rendering images but now widely used for AI computations because it can handle many calculations simultaneously.
The Blackwell series represents NVIDIA’s latest generation of AI-focused chips. By supporting these processors in edge hardware, Cisco enables organisations to run AI workloads at local sites without requiring the power consumption and physical space of data centre equipment.
For telecommunications service providers, Cisco announced a reference design called Cisco AI Grid with NVIDIA. This combines Cisco’s network management platform with NVIDIA’s Blackwell GPUs, allowing telecom operators to offer managed AI services to enterprise customers using their existing network infrastructure.
Data centre switches for large-scale AI
The partnership also addresses requirements for large AI installations, sometimes called AI factories, that process vast amounts of data for training and running AI models.
Cisco announced high-speed network switches capable of handling 102.4 terabits per second, powered by NVIDIA Spectrum-6 Ethernet switch silicon. A terabit equals one trillion bits of data. For context, this single switch can theoretically handle data equivalent to streaming roughly 20 lakh high-definition videos simultaneously.
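The streaming comparison can be checked with a quick back-of-the-envelope calculation. The announcement does not state a per-stream bitrate, so the figure below assumes a high-quality HD stream of about 50 megabits per second; at lower HD bitrates the theoretical count would be correspondingly higher.

```python
# Back-of-the-envelope check of the switch's theoretical streaming capacity.
# Assumption (not from the announcement): one high-quality HD video stream
# consumes roughly 50 megabits per second.

switch_capacity_bps = 102.4e12  # 102.4 terabits per second
hd_stream_bps = 50e6            # assumed 50 Mbps per HD stream

concurrent_streams = switch_capacity_bps / hd_stream_bps
print(f"{concurrent_streams:,.0f} concurrent streams")
# ~2,048,000, i.e. roughly 20 lakh
```

This is raw line-rate divided by stream bitrate; real-world capacity would be lower once protocol overheads and traffic patterns are accounted for.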
The company also said its 800 gigabit switches using NVIDIA Spectrum-4 silicon are now generally available for purchase.
Cisco Nexus Hyperfabric, a software platform for managing data centre networks, will support these switches as part of a broader management system called Cisco Nexus One. The company said this would simplify deployment by providing a pre-validated configuration rather than requiring IT teams to integrate components from multiple vendors on their own.
Both companies emphasised that security features are integrated into the architecture rather than added as a separate layer. As AI systems increasingly make autonomous decisions and interact with other systems, protecting the underlying models and data becomes critical.
Cisco stated it is embedding protection into the Secure AI Factory to guard against external threats, though specific security features were not detailed in the announcement.
Organisations building large AI installations now have two validated configurations to choose from. One follows the NVIDIA Cloud Partner programme’s reference architecture. The other uses Cisco’s own silicon, called Silicon One, following similar design principles.
Indian organisations
For Indian enterprises, the announcement is relevant as more companies explore AI deployment beyond pilot projects. The framework could benefit sectors such as manufacturing, healthcare and retail where processing data locally offers advantages over cloud-dependent approaches.
Service providers and data centre operators in India may also find the architecture useful as demand grows for sovereign AI infrastructure, systems where data remains within national boundaries to comply with regulatory requirements.
Cisco and NVIDIA did not announce specific pricing or availability dates for the Indian market. The companies also did not disclose which Indian customers, if any, are currently testing or deploying the expanded framework.
Your Questions, Answered
What is the Cisco Secure AI Factory with NVIDIA?
It is a framework that allows organisations to deploy AI systems across data centres, cloud platforms and edge locations. The architecture integrates Cisco networking equipment with NVIDIA processors and aims to simplify deployment while building security into the infrastructure.
What is edge computing in the context of AI?
Edge computing refers to processing data at or near its source rather than sending it to a centralised data centre. For AI applications, this reduces delay and enables real-time analysis for tasks like video monitoring or industrial automation.
Which NVIDIA GPUs does the expanded partnership support?
The partnership now supports NVIDIA RTX PRO 4500 Blackwell Server Edition GPUs for edge deployments and NVIDIA Spectrum-6 silicon for high-speed data centre switches capable of 102.4 terabits per second.
How does this partnership benefit Indian organisations?
Indian enterprises in manufacturing, healthcare and retail can use the framework to run AI workloads locally. Service providers may use it to build sovereign AI infrastructure where data remains within national boundaries for regulatory compliance.

