Campus Design – Part 1: Physical Topology | Dell EMC Networking

I thought it would be useful to do a few posts once in a while with more of a campus focus.

With any design workshop, it is vital to qualify deeply and comprehensively before proposing a solution. Equally, it is never good practice to work in reverse and try to “adapt” the requirements to a pre-chosen solution. Business outcomes and technical objectives should be determined first, and allowed to guide the components of the design itself.

On the technical front, questions around the following aspects help clarify many of the details behind the objectives:

  1. Scalability: headroom, expansion, growth – what capacity should be provisioned upfront?
  2. Reliability: what level of redundancy is needed? What is the tolerance for failure/downtime? What are the critical apps?
  3. Performance: what bandwidth at the edge and in the core, and what level of oversubscription? (A worked example follows this list.)
  4. Flexibility: multi-rate ports, modular form factors, disaggregated/open platforms instead of vertically integrated ones.
  5. Management: unified? Intuitive? Analytics? Ease of execution?
  6. Compliance: PCI DSS, etc.
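
On the performance point, the oversubscription ratio is simple arithmetic: the aggregate edge-facing bandwidth of a switch divided by its aggregate uplink bandwidth. A minimal sketch in Python (the port counts and speeds below are illustrative assumptions, not figures from any specific design):

```python
def oversubscription_ratio(edge_ports: int, edge_speed_gbps: float,
                           uplink_ports: int, uplink_speed_gbps: float) -> float:
    """Aggregate edge-facing bandwidth divided by aggregate uplink bandwidth."""
    return (edge_ports * edge_speed_gbps) / (uplink_ports * uplink_speed_gbps)

# Hypothetical example: a 48-port 1G access switch with 2 x 10G uplinks.
print(f"{oversubscription_ratio(48, 1, 2, 10):.1f}:1")  # 2.4:1
```

Whether a ratio like 2.4:1 is acceptable depends on the traffic profile established during qualification; campus access tiers typically tolerate higher ratios than server aggregation.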

Beyond establishing the objectives – it is usual to find a mix of both the campus (N-Series) and the DC (S/Z-Series) ranges of switches in a campus deployment. The S-Series would typically be used in the core or for server aggregation. With the availability of the S3048-ON and S3148, there are 1G switches in the S-Series range for both the DC (S3048-ON) and the campus wiring closet (S3148). Both run DNOS 9, which brings its feature set (VLT, VRF-lite, etc.). The S3148, being a campus switch, is PoE-enabled. On the N-Series front, there is a rich portfolio of user edge/access switches in 1G, PoE, and 24/48-port varieties.

Usually, the initial scoping and qualification should establish the baseline features that are requisite. This will be accompanied by qualifying port requirements and subscription levels from the edge to the northbound tier, whether core or distribution. For example (these rules are distilled into a small code sketch after the list):

  • If a dual-active control plane with Layer 2 multipathing in the core is required, it calls for VLT, and consequently DNOS 9 based (S-Series) switches.
  • If a stacking depth of more than 6 units is required, DNOS 6 based N-Series will need to be considered.
  • If support for 10 Mb half-duplex is needed, N-Series will need to be considered.
  • If a web UI on the switch itself is needed (not talking about an NMS like OMNM), N-Series will need to be considered.
  • If a chassis-based solution (with Rapid Access Nodes) is needed, the C-Series chassis will need to be considered.
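
To make the selection logic concrete, here is a toy Python sketch that encodes the bullets above literally. The `Requirements` fields and family labels are my own illustrative names, not anything from Dell's tooling, and a real qualification exercise would weigh far more criteria:

```python
from dataclasses import dataclass

@dataclass
class Requirements:
    # Each field mirrors one bullet above; names are illustrative only.
    dual_active_l2_multipath: bool = False   # VLT in the core
    stack_depth: int = 1
    needs_10mb_half_duplex: bool = False
    needs_on_switch_web_ui: bool = False
    needs_chassis: bool = False              # Rapid Access Nodes

def candidate_families(req: Requirements) -> set[str]:
    """Encode the bullets literally; real qualification is much broader."""
    families = set()
    if req.dual_active_l2_multipath:
        families.add("S-Series (DNOS 9, VLT)")
    if (req.stack_depth > 6 or req.needs_10mb_half_duplex
            or req.needs_on_switch_web_ui):
        families.add("N-Series (DNOS 6)")
    if req.needs_chassis:
        families.add("C-Series chassis")
    return families

# Example: VLT core plus an 8-unit stack at the edge -> both S- and N-Series qualify.
print(candidate_families(Requirements(dual_active_l2_multipath=True, stack_depth=8)))
```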

The following is a sample HLD capturing an S-Series 10G core and server aggregation, with options for 1G at the access/user edge via N-Series. Do note that the depicted options are only a subset of those available; there are other switches in the range, and I intend to cover the different options for the campus edge in more detail in the next post.

 

[Figure: Campus HLD – physical topology (hasanmansur.com)]

There are times when it is preferable to have the same range footprint across the campus, or in a specific design block. It typically helps with spares: if you standardize across the campus, it is possible to keep a spare switch that can replace a failed node in any location while one waits for the warranty replacement to arrive. It also means the same OS – OS9 or OS6 – across the board (OS10 will soon start showing up in core or aggregation as well).

Another angle to consider is management. While OpenManage Network Manager (OMNM) provides comprehensive capabilities for the wired solutions (S- and N-Series), if Aerohive-based wireless access is present in the network, one has to look at HiveManager NG for management of the wireless. HMNG's support is limited to Aerohive and the N-Series only (some caveats exist); the S-Series is not covered. Therefore, the choice of the right management tool has to be given some thought.

The second part of this post will cover the logical topology that goes with the physical HLD, along with a brief, high-level summary of the different switches and their typical deployment.

 
