
I work for a mid-sized business that continues to grow and uses a lot of bandwidth. The 6513 in our core still operated just fine, but it was beginning to show its age: we had maxed out its 10Gig capacity, and we really needed chassis redundancy in the core. We already had Nexus 5000s in our data center and the Nexus 1000v in our virtual environments, but using Nexus as your core routers is a different matter entirely. I had spent several weeks reading up on vPC limitations and the advantages the Nexus 7000 has with certain FHRPs, yet actually doing it, after more than a decade of installing only Catalyst switches into network cores, was a new challenge. This is my first, and perhaps last, post, but I think an actual working design and configs may bring some value to those of you out there who, like me, have a little network know-how but little or no experience with Nexus.
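Before the design details, a minimal sketch of the vPC piece, since that is what lets a pair of 7Ks act as one logical core, and it is also where the FHRP advantage mentioned above comes from: with vPC, both HSRP peers actively forward traffic for a vPC'd VLAN. The domain ID, addresses, and port-channel numbers below are placeholders, not our production values (those are in the config PDF linked further down):

feature vpc
feature lacp

vpc domain 100
  ! keepalive runs over mgmt0, never over the peer-link itself
  peer-keepalive destination 10.0.0.2 source 10.0.0.1 vrf management
  ! lets either peer route frames addressed to the other peer's MAC
  peer-gateway

interface port-channel1
  description vPC peer-link to the second 7009
  switchport mode trunk
  vpc peer-link

interface port-channel10
  description example vPC down to a 3750X stack
  switchport mode trunk
  vpc 10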
The image above is the actual design of our network (PDF link below). I've also attached a PDF of the configuration of each Nexus 7K below.
The final design of our network is two Nexus 7009s with Sup 2s and F2 modules (1/10Gb line cards) in the core. Connected to them we have the following:
1x 5508 Wireless Controller
2x 5585-X ASA Firewalls (connected with a port channel to each, plus static routes, which the vPC design makes necessary; see the routing sketch after this list)
6x Stacks of 3750X access switches in different building IDFs
1x Stack of 3750X switches, connected via routed links in another building
2x Nexus 5010s (along with a handful of FEXs) in our Data Center
2x MPLS Routers for WAN sites, sharing EIGRP routes with the Nexus 7Ks (these also run CUBE for our SIP trunks; see the routing sketch below)
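To make the routing side of the list above concrete, here is a hedged sketch of the two pieces called out: the static default toward the ASA pair (dynamic routing adjacencies across a vPC to the firewalls weren't supported, which is what makes the static routes necessary) and EIGRP peering with the MPLS routers. The addresses, AS number, and interface numbers are placeholders, not our production values:

feature eigrp

! Static default toward the ASAs' inside interface, redistributed so
! the WAN sites learn it; NX-OS requires a route-map on redistribution
ip route 0.0.0.0/0 10.1.1.1
route-map STATIC-TO-EIGRP permit 10

router eigrp 100
  redistribute static route-map STATIC-TO-EIGRP

interface Ethernet1/48
  description routed link to an MPLS/CUBE router
  no switchport
  ip address 10.30.0.1/30
  ip router eigrp 100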
Here is a link to the PDF version of the design diagram.
Network Diagram PDF
Here is the link to the annotated configuration file.
Nexus 7000 Configuration PDF
Not sure what you are/were doing with your 3750s, but I did the same thing with stacks of 2960-Xs in campus closets. I also put the default gateways for the closet voice and data VLANs on the 7K, since 100% of the traffic is going to route there anyway. This also removed L3 requirements (cost and routing config) from the closet gear and allowed the QoS and ACLs to be placed in just one place.
Five years later, with 2 data centers, 35,000 ports, 5,000+ staff and 8,000 Wi-Fi users, 4 UCS clusters, and 4 different VoIP products ... working great.
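For anyone wanting to replicate the gateways-on-the-7K approach described in that comment, a minimal illustrative sketch of one closet VLAN's SVI with HSRP (the VLAN number, addresses, and priority are placeholders):

feature interface-vlan
feature hsrp

interface Vlan20
  description closet data VLAN; gateway lives on the 7Ks, not the closet stack
  no shutdown
  ip address 10.20.0.2/24
  hsrp 20
    preempt
    priority 110
    ip 10.20.0.1

With vPC, the standby 7K forwards for this group as well, so both cores carry closet traffic.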
I have the same architecture. It is Layer 2 to the edge, but when I began to roll out the access closets (with 3750Gs) the 29xx didn't have stacking capability, and later, when upgrading to 3750X, the 29xx series had no 10Gig uplink options.
Very interesting stuff, except I wonder: could I do the same design and get away without using MPLS routers by using the Layer 3 module on the 5K? I have a BGP requirement that has to live on the actual 5K.
A Nexus can run BGP if you have the Layer 3 module. I'm interested to hear why you would have to run MPLS into your 5Ks. Also, if you use vPCs, you will want that to be two connections, not just one.
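For reference, a rough sketch of what that looks like on a 5500-series 5K once the Layer 3 module and license are installed; BGP is configured much as on any other NX-OS box. The AS numbers, addresses, and prefix here are placeholders:

feature bgp

router bgp 65001
  router-id 10.0.0.1
  address-family ipv4 unicast
    network 10.10.0.0/16
  neighbor 192.0.2.1 remote-as 65000
    description upstream carrier
    address-family ipv4 unicast

Note that the original 5010/5020 had no Layer 3 option; the module applies to the 5548/5596.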