Apologies for the images!

Somehow the images I had were links to other images internal to Cisco.

I'll try to fix up the existing posts with actual image files instead of links. Hopefully it won't take too long, possibly by the end of July.

Please continue to reference the “unofficial ACI guide” in the meantime.

https://unofficialaciguide.com

GOLF + Transit Routing

Introduction

 

Recently, there have been some questions regarding GOLF and Tenant Level L3outs. Specifically, is it possible to share routes from a Tenant Level L3out to the GOLF/WAN routers?

 

This article will cover configuration for a single, simple use-case.

 

Prerequisites

 

  • Must have GOLF integration already up and running
    • This guide is only for the transit configuration. There are other guides for setting up GOLF

 

Configure

 

Network Diagram

 

Configurations

 

  1. Ensure the tenant VRF is ready for GOLF
    • Enable OpFlex and name the VRF
  2. Screen Shot 2017-12-13 at 2.43.51 PM.png
  3. Create the GOLF Dummy L3out
    • Select the Tenant VRF
    • Add a Consumer Label
  4. Screen Shot 2017-12-13 at 10.14.52 AM.png
  5. Since this was a new VRF on the GOLF router side, some configuration was needed for this exercise
    1. Create a new loopback under the new VRF
    2. Assign an IP address
    3. Under the BGP process, advertise the new loopback into EVPN
      • interface loopback88
          vrf member golf-dp
          ip address 88.88.88.1/32
        
        vrf context golf-dp
          vni 1504098
          rd auto
          address-family ipv4 unicast
            route-target import 65000:19267588
            route-target import 65000:19267588 evpn
            route-target import 1999:1999
            route-target import 1999:1999 evpn
            route-target export 65000:19267588
            route-target export 65000:19267588 evpn
            route-target export 1999:1999
            route-target export 1999:1999 evpn
          address-family ipv6 unicast
            route-target import 65000:19267588 evpn
            route-target export 65000:19267588 evpn
        router bgp 80
          vrf golf-dp
            address-family ipv4 unicast
              network 88.88.88.1/32 evpn
              advertise l2vpn evpn
              label-allocation-mode per-vrf
            address-family ipv6 unicast
              advertise l2vpn evpn
              label-allocation-mode per-vrf
        ipp tenant golf-dp 24
  6. Under the VRF in ACI, configure “BGP Context Per Address Family” and “BGP Route Target Profiles”
  7. Configure the other L3 out in the Tenant.
    • In this article, OSPF was used.

At this point, regular Transit routing configuration is used between the GOLF Dummy L3out and the Tenant OSPF L3 out.

 

  1. At the GOLF Dummy L3 out, use the regular “External Subnet for External EPG” flag for the prefixes/subnets from the WAN
  2. At the OSPF Tenant L3 out, use the regular “External Subnet for External EPG” flag for the prefixes/subnets from the OSPF routed domain

This next section is the actual redistribution configuration between OSPF and GOLF.

 

  1. At the GOLF Dummy L3 out, use the “Export Route Control Subnet” flag on the prefixes/subnets from the OSPF routed domain that you wish to export towards the GOLF/WAN router.
  2. Likewise, at the OSPF Tenant L3 out, use the “Export Route Control Subnet” flag on the prefixes/subnets from the GOLF/WAN side to export/advertise down towards the OSPF routed domain.
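In the object model, these checkboxes map to the scope property of the l3extSubnet object under each InstP. Here is a hedged sketch of the two entries, using the prefixes from the Verify section of this article (the parent InstP objects are omitted, and the exact scope tokens are from memory, so verify against your own APIC):

```xml
<!-- On the GOLF Dummy L3out InstP: export the OSPF-learned prefix to the WAN -->
<l3extSubnet ip="77.77.77.1/32" scope="export-rtctrl"/>
<!-- On the OSPF Tenant L3out InstP: export the WAN-learned prefix down to OSPF -->
<l3extSubnet ip="88.88.88.1/32" scope="export-rtctrl"/>
```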

Here is a diagram of how that configuration works for this particular article:

 

Screen Shot 2017-12-14 at 10.25.08 AM.png

 

Screen Shot 2017-12-14 at 10.26.56 AM.pngScreen Shot 2017-12-14 at 10.27.08 AM.png

 

Finally, make sure there is a contract between the two L3 outs.

 

Verify

 

Regular routing, transit routing, and GOLF verification commands can all be used here.

 

 

ACI-7710-IPN(config)# show ip route vrf golf-dp 
IP Route Table for VRF "golf-dp"
'*' denotes best ucast next-hop
'**' denotes best mcast next-hop
'[x/y]' denotes [preference/metric]
'%<string>' in via output denotes VRF <string>

77.77.77.1/32, ubest/mbest: 1/0
    *via 10.0.20.64%default, [20/5], 00:01:34, bgp-80, external, tag 65000 (evpn), segid: 2490372 tunnelid: 0xa001440 encap: VXLAN
 
88.88.88.1/32, ubest/mbest: 2/0, attached
    *via 88.88.88.1, Lo88, [0/0], 02:04:32, local
    *via 88.88.88.1, Lo88, [0/0], 02:04:32, direct
ACI-7710-IPN(config)# 

ACI-7710-IPN# show bgp l2vpn evpn vrf golf-dp
BGP routing table information for VRF default, address family L2VPN EVPN
BGP table version is 155, Local Router ID is 192.168.3.2
Status: s-suppressed, x-deleted, S-stale, d-dampened, h-history, *-valid, >-best
Path type: i-internal, e-external, c-confed, l-local, a-aggregate, r-redist, I-injected
Origin codes: i - IGP, e - EGP, ? - incomplete, | - multipath, & - backup

   Network            Next Hop            Metric     LocPrf     Weight Path
Route Distinguisher: 192.168.1.101:3
*>e[5]:[0]:[0]:[32]:[77.77.77.1]:[0.0.0.0]/224
                      10.0.20.64               5                     0 65000 ?

Route Distinguisher: 192.168.1.101:17
*>e[5]:[0]:[0]:[24]:[172.16.1.0]:[0.0.0.0]/224
                      10.0.0.34                                      0 65000 i
*>e[5]:[0]:[0]:[24]:[172.16.2.0]:[0.0.0.0]/224
                      10.0.0.34                                      0 65000 i

Route Distinguisher: 192.168.3.2:4    (L3VNI 1504096)
*>e[5]:[0]:[0]:[24]:[172.16.1.0]:[0.0.0.0]/224
                      10.0.0.34                                      0 65000 i
*>e[5]:[0]:[0]:[24]:[172.16.2.0]:[0.0.0.0]/224
                      10.0.0.34                                      0 65000 i

Route Distinguisher: 192.168.3.2:9    (L3VNI 1504097)
*>l[5]:[0]:[0]:[32]:[10.11.12.13]:[0.0.0.0]/224
                      192.168.3.2                       100      32768 i

Route Distinguisher: 192.168.3.2:10    (L3VNI 1504098)
*>e[5]:[0]:[0]:[32]:[77.77.77.1]:[0.0.0.0]/224
                      10.0.20.64               5                     0 65000 ?
*>l[5]:[0]:[0]:[32]:[88.88.88.1]:[0.0.0.0]/224
                      192.168.3.2                       100      32768 i

ACI-7710-IPN# 


a-spine1# vsh -c "show bgp vpnv4 un 88.88.88.1 vrf overlay-1"
BGP routing table information for VRF overlay-1, address family VPNv4 Unicast
Route Distinguisher: 10.0.20.64:4
BGP routing table entry for 0.0.0.0/0, version 616 dest ptr 0xab8456c4
Paths: (1 available, best #1)
Flags: (0x000002 00000000) on xmit-list, is not in urib
Multipath: eBGP iBGP

  Advertised path-id 1
  Path type: internal 0x40000018 0x40 ref 0, path is valid, is best path
  AS-Path: NONE, path sourced internal to AS
    10.0.20.64 (metric 2) from 10.0.20.64 (10.0.20.64)
      Origin incomplete, MED 1, localpref 100, weight 0
      Received label 0
      Received path-id 1
      Extcommunity: 
          RT:65000:2916352
          VNID:2916352
          COST:pre-bestpath:162:110

  Path-id 1 advertised to peers:
    10.0.20.67         192.168.2.101      192.168.2.102  

BGP routing table information for VRF overlay-1, address family VPNv4 Unicast
Route Distinguisher: 10.0.20.67:4
BGP routing table entry for 0.0.0.0/0, version 28 dest ptr 0xab847730
Paths: (1 available, best #1)
Flags: (0x000002 00000000) on xmit-list, is not in urib
Multipath: eBGP iBGP

  Advertised path-id 1
  Path type: internal 0x40000018 0x40 ref 0, path is valid, is best path
  AS-Path: NONE, path sourced internal to AS
    10.0.20.67 (metric 2) from 10.0.20.67 (10.0.20.67)
      Origin incomplete, MED 1, localpref 100, weight 0
      Received label 0
      Received path-id 1
      Extcommunity: 
          RT:65000:2916352
          VNID:2916352
          COST:pre-bestpath:162:110

  Path-id 1 advertised to peers:
    10.0.20.64         192.168.2.101      192.168.2.102  

BGP routing table information for VRF overlay-1, address family VPNv4 Unicast
Route Distinguisher: 192.168.1.101:3    (VRF GOLF-dp:golf-dp)
BGP routing table entry for 88.88.88.1/32, version 9 dest ptr 0xaad01a50
Paths: (1 available, best #1)
Flags: (0x0c0002 00000000) on xmit-list, is not in urib, exported
  vpn: version 1981, (0x100002) on xmit-list
Multipath: eBGP iBGP

  Advertised path-id 1, VPN AF advertised path-id 1
  Path type: external 0xc0000028 0x0 ref 56506, path is valid, is best path, remote nh not installed
             Imported from 192.168.3.2:10:[5]:[0]:[0]:[32]:[88.88.88.1]:[0.0.0.0]/120 
  AS-Path: 80 , path sourced external to AS
    192.168.3.2 (metric 2) from 192.168.3.2 (192.168.3.2)
      Origin IGP, MED not set, localpref 100, weight 0
      Received label 1504098
      Extcommunity: 
          RT:65000:2490372
          COST:pre-bestpath:164:2147483648
          ENCAP:8
          Router MAC:547f.eeec.af42
          VNID:1504098

  VRF advertise information:
  Path-id 1 not advertised to any peer

  VPN AF advertise information:
  Path-id 1 advertised to peers:
    10.0.20.64         10.0.20.67     

a-leaf103# show ip route vrf GOLF-dp:golf-dp
IP Route Table for VRF "GOLF-dp:golf-dp"
'*' denotes best ucast next-hop
'**' denotes best mcast next-hop
'[x/y]' denotes [preference/metric]
'%<string>' in via output denotes VRF <string>

4.4.4.0/24, ubest/mbest: 1/0, attached, direct
    *via 4.4.4.103, vlan229, [1/0], 00:01:54, direct
4.4.4.103/32, ubest/mbest: 1/0, attached
    *via 4.4.4.103, vlan229, [1/0], 00:01:54, local, local
43.43.43.43/32, ubest/mbest: 2/0, attached, direct
    *via 43.43.43.43, lo14, [1/0], 00:01:54, local, local
    *via 43.43.43.43, lo14, [1/0], 00:01:54, direct
46.46.46.46/32, ubest/mbest: 1/0
    *via 10.0.20.64%overlay-1, [1/0], 00:01:53, bgp-65000, internal, tag 65000
77.77.77.1/32, ubest/mbest: 1/0
    *via 4.4.4.1, vlan229, [110/5], 00:01:14, ospf-default, intra
88.88.88.1/32, ubest/mbest: 1/0
    *via 192.168.3.2%overlay-1, [200/0], 00:01:53, bgp-65000, internal, tag 80
a-leaf103# 


ACI-5548-B(config)# show ip ospf neigh vrf golf-dp
ospf-27: Unknown vrf golf-dp
ospf-10: Unknown vrf golf-dp
 OSPF Process ID 88 VRF golf-dp
 Total number of neighbors: 2
 Neighbor ID     Pri State            Up Time  Address         Interface
 43.43.43.43       1 FULL/DROTHER     00:00:28 4.4.4.103       Vlan2280
 46.46.46.46       1 FULL/BDR         00:00:27 4.4.4.100       Vlan2280
ACI-5548-B(config)#
ACI-5548-B(config)#
ACI-5548-B(config)#
ACI-5548-B(config)# show ip route vrf golf-dp
IP Route Table for VRF "golf-dp"
'*' denotes best ucast next-hop
'**' denotes best mcast next-hop
'[x/y]' denotes [preference/metric]
'%<string>' in via output denotes VRF <string>

4.4.4.0/24, ubest/mbest: 1/0, attached
    *via 4.4.4.1, Vlan2280, [0/0], 00:01:52, direct
4.4.4.1/32, ubest/mbest: 1/0, attached
    *via 4.4.4.1, Vlan2280, [0/0], 00:01:52, local
43.43.43.43/32, ubest/mbest: 1/0
    *via 4.4.4.103, Vlan2280, [110/41], 00:00:31, ospf-88, intra
46.46.46.46/32, ubest/mbest: 1/0
    *via 4.4.4.100, Vlan2280, [110/41], 00:00:31, ospf-88, intra
77.77.77.1/32, ubest/mbest: 2/0, attached
    *via 77.77.77.1, Lo44, [0/0], 01:29:23, local
    *via 77.77.77.1, Lo44, [0/0], 01:29:23, direct
88.88.88.1/32, ubest/mbest: 2/0
    *via 4.4.4.100, Vlan2280, [110/1], 00:00:31, ospf-88, type-2, tag 4294967295,
    *via 4.4.4.103, Vlan2280, [110/1], 00:00:31, ospf-88, type-2, tag 4294967295,
ACI-5548-B(config)#


ACI-5548-B(config)# ping 88.88.88.1 vrf golf-dp source 77.77.77.1
PING 88.88.88.1 (88.88.88.1) from 77.77.77.1: 56 data bytes
64 bytes from 88.88.88.1: icmp_seq=0 ttl=253 time=3.335 ms
64 bytes from 88.88.88.1: icmp_seq=1 ttl=253 time=2.377 ms
64 bytes from 88.88.88.1: icmp_seq=2 ttl=253 time=2.489 ms
64 bytes from 88.88.88.1: icmp_seq=3 ttl=253 time=2.465 ms
64 bytes from 88.88.88.1: icmp_seq=4 ttl=253 time=2.488 ms

--- 88.88.88.1 ping statistics ---
5 packets transmitted, 5 packets received, 0.00% packet loss
round-trip min/avg/max = 2.377/2.63/3.335 ms
ACI-5548-B(config)#

 

 

Q-in-Q for EPGs

Introduction

 

Ebro, or ACI 3.0, will support Q-in-Q classification into regular EPGs. This feature is only available on FX hardware, as only that ASIC can classify double-tagged traffic into an EPG.

 

The newer platforms (93180YC-FX, for example) will classify traffic based on the combination of outer and inner VLAN tags, written as (outer, inner). Each classification is unique based on the combination, such that:

  • (x, y) = EPG1
  • (x, z) = EPG2
  • (y, x) = EPG3

The egress port can be any other port in the same EPG or another EPG in ACI. There is no restriction that the egress port be another Q-in-Q double-tagged port. This means that as of 3.0, ACI can mix VLAN, VXLAN, and now double-tag VLAN encapsulations.
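As a toy illustration of the classification rule above (pure Python, nothing here is an APIC object; the tag values and EPG names are made up), the (outer, inner) pair behaves like a single composite lookup key:

```python
# Toy model of Q-in-Q EPG classification: the (outer, inner) VLAN pair
# is one composite key, so tag order matters.
epg_by_qinq = {
    (2359, 2360): "EPG1",   # (x, y)
    (2359, 2361): "EPG2",   # (x, z)
    (2360, 2359): "EPG3",   # (y, x) -- distinct from (x, y)
}

def classify(outer: int, inner: int) -> str:
    """Return the EPG for a double-tagged frame, or 'unclassified'."""
    return epg_by_qinq.get((outer, inner), "unclassified")
```

Swapping the two tags lands in a different EPG, which is the key behavioral difference from traditional Q-in-Q tunneling, where only the outer tag matters.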

For more information on this feature, check out the external documentation:

https://www.cisco.com/c/en/us/td/docs/switches/datacenter/aci/apic/sw/2-x/L2_config/b_Cisco_APIC_Layer_2_Configuration_Guide/b_Cisco_APIC_Layer_2_Configuration_Guide_chapter_01101.html

Prerequisites

 

Requirements

  • ACI 3.0/13.0
  • New hardware (FX only)
  • The per-port-VLAN feature cannot be used together with double tags
  • Single-tagged and double-tagged encaps cannot be mixed on a double-tagged port
  • Traditional Q-in-Q tunneling cannot be mixed on a double-tagged port

Configure

 

  • Only two small changes in the GUI were implemented on existing policies:
    • The L2 Interface Policy gained a “doubleQtagPort” option
    • Static Paths in the GUI have a dropdown to specify “qinq” instead of “VLAN”

We will now walk through the steps to configure an interface in ACI for Q-in-Q tagging and EPG classification.

Configure access policies as one normally would for where the device sending in Q-in-Q frames will be connected. This can be an Access Port, PC, or vPC.

At the Interface Policy Group, create a new “L2 Interface Policy” for this particular use case. Best practice dictates that this policy have a generic name so it can be reused.

 

Make sure “doubleQtagPort” is selected; this enables Q-in-Q classification into an EPG, as opposed to the other options:

 

Option          Explanation
corePort        Regular Q-in-Q tunneling through the fabric. This option enables multiple tunnels on the same interface.
disabled        Disables all forms of Q-in-Q.
doubleQtagPort  Used for Q-in-Q EPG classification.
edgePort        Regular Q-in-Q tunneling through the fabric. This option allows only a single tunnel per interface.
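For the object-model-inclined, this GUI option corresponds to the qinq property of the l2IfPol object. A hedged sketch (the policy name is illustrative, and the attribute name is from memory, so verify against your own APIC):

```xml
<l2IfPol name="qinq-double-tag" qinq="doubleQtagPort"/>
```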

 

The final Interface Policy Group should look similar to this:

 

 

Tenant Configuration

 

Moving on to the tenant configuration, begin by navigating to the EPG that will host the endpoint sending Q-in-Q traffic. Associate the domain as one normally would and click to create a new static path:

Click the dropdown for “Port Encap” and select QinQ; this will change the rest of the wizard.

 

Fill in the two expected tags and hit submit!
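For reference, the resulting static path uses an encap of the form qinq-<outer>-<inner>, which is visible later in the Verify section as qinq-2359-2360. A hedged object-model sketch of the static path (the tDn is illustrative for this lab, not a value from the article):

```xml
<fvRsPathAtt tDn="topology/pod-1/paths-101/pathep-[eth1/47]" encap="qinq-2359-2360" mode="regular"/>
```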

 

 

Verify

 

Since the main part of the configuration involves changing the L2 policy for the interface, let’s check if the change took place:

rtp-f2-p1-leaf6# show int e1/47
Ethernet1/47 is down (inactive)
admin state is up, Dedicated Interface
  Hardware: 1000/10000/25000/auto Ethernet, address: 00a2.ee28.c358 (bias 00a2.ee28.c358)
  MTU 9000 bytes, BW 0 Kbit, DLY 1 usec
  reliability 255/255, txload 1/255, rxload 1/255
  Encapsulation ARPA, medium is broadcast
  Port mode is trunk-doubleEncapPort
  full-duplex, 1000 Mb/s, media type is 1G
  FEC (forward-error-correction) : disable-fec
  Beacon is turned off
  Auto-Negotiation is turned on
  Input flow-control is off, output flow-control is off
  Auto-mdix is turned off
  Rate mode is dedicated
  Switchport monitor is off
  EtherType is 0x8100
  EEE (efficient-ethernet) : n/a
  Last link flapped 17:51:11
  Last clearing of "show interface" counters never
  4 interface resets
  30 seconds input rate 0 bits/sec, 0 packets/sec
  30 seconds output rate 0 bits/sec, 0 packets/sec
  Load-Interval #2: 5 minute (300 seconds)
    input rate 0 bps, 0 pps; output rate 0 bps, 0 pps
  RX
    982 unicast packets  1662 multicast packets  17 broadcast packets
    2661 input packets  404262 bytes
    0 jumbo packets  0 storm suppression bytes
    0 runts  0 giants  0 CRC  0 no buffer
    0 input error  0 short frame  0 overrun   0 underrun  0 ignored
    0 watchdog  0 bad etype drop  0 bad proto drop  0 if down drop
    0 input with dribble  0 input discard
    0 Rx pause
  TX
    0 unicast packets  4747 multicast packets  0 broadcast packets
    4747 output packets  701383 bytes
    0 jumbo packets
    0 output error  0 collision  0 deferred  0 late collision
    0 lost carrier  0 no carrier  0 babble  0 output discard
    0 Tx pause

The same can be done from the Object Model with a query for l1PhysIf.

The Static Path can be verified using fvRsPathAtt
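For example, moquery on the switch CLI can pull both objects. A hedged example (the filter syntax is standard moquery, but the exact output fields vary by version, and the greps are just a convenience):

```
rtp-f2-p1-leaf6# moquery -c l1PhysIf -f 'l1.PhysIf.id=="eth1/47"' | egrep 'id|mode'
rtp-f2-p1-leaf6# moquery -c fvRsPathAtt | grep qinq
```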

 

Finally, the actual deployed VLANs can be verified as well using:

rtp-f2-p1-leaf6# show vlan id 33 ex

 VLAN Name                             Status    Ports                           
 ---- -------------------------------- --------- ------------------------------- 
 33   dpita-tenant:dpita-AP:dpita-     active    Eth1/47 
      vmotion                                            

 VLAN Type  Vlan-mode  Encap                                                         
 ---- ----- ---------- -------------------------------                               
 33   enet  CE         qinq-2359-2360                                                
rtp-f2-p1-leaf6# 

This has created a new type of concrete object, qinqCktEp.

 

 

 

Troubleshoot

 

Coming soon.

 

ACI 3.0!

It’s here!

https://software.cisco.com/download/release.html?mdfid=285968390&softwareid=286278832&release=3.0(1k)&relind=AVAILABLE&rellifecycle=&reltype=latest

and the release notes (always important)

https://www.cisco.com/c/en/us/td/docs/switches/datacenter/aci/apic/sw/1-x/release/notes/apic_rn_301.html

Check out all the new features! And this little gem, too:

Cisco ACI Multi-Site Controller! A lot more on this later!

https://www.cisco.com/c/en/us/td/docs/switches/datacenter/aci/aci_multi-site/sw/1x/release_notes/Cisco_ACI_Multi-Site_RN_101.html

Have a nice day!

 

Inter-VRF Transit Routing

Introduction

As of 2.2.2 code, transit routing is now allowed between L3 Outs in different VRFs. In other words: Inter-VRF Transit Routing, or route leaking between L3 outs/VRFs.

 

Requirements

  • ACI 2.2.2 (Danube MR1)
  • Two VRFs
  • One L3 out in each VRF

 

Configure

 

Network Diagram

Screen Shot 2017-06-17 at 8.37.09 AM.png

 

 

 

 

 

 

Configurations

  1. Create two different L3 outs in two different VRFs (different tenant or same tenant, doesn’t matter)
  2. Ensure both L3 outs, individually, are working. Neighborships are up and routes are coming in and out.
  3. Now the fun part:

Traditionally, with transit routing, subnets on the ingress L3 out are marked with “External Subnet for External EPG” so that the source VRF and InstP can apply policy. The remote or egress L3 out will need an entry for the same subnet, but marked as “Export Route Control Subnet”. That flag is the essential “transit routing” flag. The inverse must be done as well.

 

In our example topology above, dpita-tenant:dpita-context is learning 123.123.254.0/24 from the outside. This subnet is marked as “External Subnet for External EPG”. In the egress VRF dpita-tenant:2600, this subnet is marked as “Export Route Control Subnet”, and the transit routing config is complete.

 

With Inter-VRF transit routing, 123.123.254.0/24 will be marked with:

  • External Subnet for External EPG = standard for subnets learned from the outside; applies policy for anything in the same VRF
  • Shared Route Control Subnet = enables the subnet to be route-leaked
  • Shared Security Import Subnet = applies the correct pcTag of the InstP

In the remote VRF, nothing changes, 123.123.254.0/24 is marked for:

  • Export route control subnet.

The config is now repeated for 55.55.254.0/24 in its source VRF dpita-tenant:2600:

  • External subnet for external EPG
  • Shared route control subnet
  • Shared security import subnet

In the remote VRF, 55.55.254.0/24 is marked for “Export Route Control Subnet”.
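These checkboxes map to the scope property of the l3extSubnet object under each InstP. A hedged sketch of the two entries for 123.123.254.0/24 (parent objects omitted; the scope tokens are from memory, so verify against your own APIC):

```xml
<!-- Source VRF (dpita-tenant:dpita-context) InstP: classify, leak, and apply the pcTag -->
<l3extSubnet ip="123.123.254.0/24" scope="import-security,shared-rtctrl,shared-security"/>
<!-- Remote VRF (dpita-tenant:2600) InstP: plain transit export -->
<l3extSubnet ip="123.123.254.0/24" scope="export-rtctrl"/>
```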

 

The InstPs should look something like this:

Screen Shot 2017-06-17 at 9.54.34 AM.png

 

 

Screen Shot 2017-06-17 at 9.55.31 AM.png

 

 

With any route-leaking configuration, the final piece of the puzzle is the contract:

  • Create a global-scoped contract
  • Have one InstP provide it
  • Have the other InstP consume it
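The contract can be sketched in the object model too. A hedged example, assuming the permit-anything default filter and an illustrative name (nothing here comes from the article itself):

```xml
<vzBrCP name="inter-vrf-transit" scope="global">
  <vzSubj name="all-traffic">
    <vzRsSubjFiltAtt tnVzFilterName="default"/>
  </vzSubj>
</vzBrCP>
```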

 

Verify

 

 

rtp-f1-p1-leaf1# show ip route vrf dpita-tenant:dpita-context
IP Route Table for VRF "dpita-tenant:dpita-context"
'*' denotes best ucast next-hop
'**' denotes best mcast next-hop
'[x/y]' denotes [preference/metric]
'%' in via output denotes VRF 

10.10.4.0/24, ubest/mbest: 1/0, attached, direct, pervasive
    *via 10.0.208.66%overlay-1, [1/0], 1d19h, static, tag 4294967295
10.10.4.1/32, ubest/mbest: 1/0, attached, pervasive
    *via 10.10.4.1, vlan12, [1/0], 1d19h, local, local
10.10.13.0/24, ubest/mbest: 1/0, attached, direct, pervasive
    *via 10.0.208.66%overlay-1, [1/0], 1d19h, static, tag 4294967295
10.10.13.1/32, ubest/mbest: 1/0, attached, pervasive
    *via 10.10.13.1, vlan18, [1/0], 1d19h, local, local
11.11.11.11/32, ubest/mbest: 2/0, attached, direct
    *via 11.11.11.11, lo5, [1/0], 1d01h, local, local
    *via 11.11.11.11, lo5, [1/0], 1d01h, direct
55.55.254.0/24, ubest/mbest: 1/0
    *via 20.0.216.93%overlay-1, [200/401], 19:20:55, bgp-1, internal, tag 1
123.123.254.0/24, ubest/mbest: 1/0
    *via 192.168.44.254, vlan22, [110/44], 21:21:49, ospf-default, intra
192.168.4.0/24, ubest/mbest: 1/0, attached, direct, pervasive
    *via 10.0.208.66%overlay-1, [1/0], 1d19h, static
192.168.4.1/32, ubest/mbest: 1/0, attached, pervasive
    *via 192.168.4.1, vlan49, [1/0], 1d19h, local, local
192.168.13.0/24, ubest/mbest: 1/0, attached, direct, pervasive
    *via 10.0.208.66%overlay-1, [1/0], 1d19h, static, tag 4294967295
192.168.13.1/32, ubest/mbest: 1/0, attached, pervasive
    *via 192.168.13.1, vlan9, [1/0], 1d19h, local, local
192.168.44.0/24, ubest/mbest: 1/0, attached, direct
    *via 192.168.44.253, vlan22, [1/0], 1d01h, direct
192.168.44.253/32, ubest/mbest: 1/0, attached
    *via 192.168.44.253, vlan22, [1/0], 1d01h, local, local
192.168.130.0/24, ubest/mbest: 1/0, attached, direct, pervasive
    *via 10.0.208.66%overlay-1, [1/0], 1d19h, static, tag 4294967295
192.168.130.1/32, ubest/mbest: 1/0, attached, pervasive
    *via 192.168.130.1, vlan16, [1/0], 1d19h, local, local
192.168.131.0/24, ubest/mbest: 1/0, attached, direct, pervasive
    *via 10.0.208.66%overlay-1, [1/0], 1d19h, static, tag 4294967295
rtp-f1-p1-leaf1# 



aci-n3k-1-bootcamp# show ip route vrf dpita-tenant:dpita-context 
IP Route Table for VRF "dpita-tenant:dpita-context"
'*' denotes best ucast next-hop
'**' denotes best mcast next-hop
'[x/y]' denotes [preference/metric]
'%' in via output denotes VRF 

11.11.11.11/32, ubest/mbest: 1/0
    *via 192.168.44.253, Vlan2356, [110/41], 21:22:50, ospf-dpita-tenant, intra
55.55.254.0/24, ubest/mbest: 1/0
    *via 192.168.44.253, Vlan2356, [110/1], 19:21:56, ospf-dpita-tenant, type-2, tag 4294967295
123.123.254.0/24, ubest/mbest: 1/0, attached
    *via 123.123.254.1, Vlan2357, [0/0], 1d00h, direct
123.123.254.1/32, ubest/mbest: 1/0, attached
    *via 123.123.254.1, Vlan2357, [0/0], 1d00h, local
192.168.44.0/24, ubest/mbest: 1/0, attached
    *via 192.168.44.254, Vlan2356, [0/0], 1d00h, direct
192.168.44.254/32, ubest/mbest: 1/0, attached
    *via 192.168.44.254, Vlan2356, [0/0], 1d00h, local
aci-n3k-1-bootcamp#

 

 

rtp-f1-p2-leaf1# show ip route vrf dpita-tenant:2600
IP Route Table for VRF "dpita-tenant:2600"
'*' denotes best ucast next-hop
'**' denotes best mcast next-hop
'[x/y]' denotes [preference/metric]
'%' in via output denotes VRF 

4.4.4.1/32, ubest/mbest: 1/0
    *via 18.18.18.1, eth1/95, [110/401], 21:24:19, ospf-default, intra
18.18.18.0/30, ubest/mbest: 1/0, attached, direct
    *via 18.18.18.2, eth1/95, [1/0], 1d00h, direct
18.18.18.2/32, ubest/mbest: 1/0, attached
    *via 18.18.18.2, eth1/95, [1/0], 1d00h, local, local
19.19.19.0/30, ubest/mbest: 1/0
    *via 18.18.18.1, eth1/95, [110/401], 21:24:19, ospf-default, intra
26.26.26.26/32, ubest/mbest: 2/0, attached, direct
    *via 26.26.26.26, lo5, [1/0], 1d00h, local, local
    *via 26.26.26.26, lo5, [1/0], 1d00h, direct
55.55.254.0/24, ubest/mbest: 1/0
    *via 18.18.18.1, eth1/95, [110/401], 21:24:19, ospf-default, intra
123.123.254.0/24, ubest/mbest: 1/0
    *via 10.0.32.95%overlay-1, [200/44], 19:20:32, bgp-1, internal, tag 1
rtp-f1-p2-leaf1# 



2600#show ip route
Codes: C - connected, S - static, R - RIP, M - mobile, B - BGP
       D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area 
       N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2
       E1 - OSPF external type 1, E2 - OSPF external type 2
       i - IS-IS, su - IS-IS summary, L1 - IS-IS level-1, L2 - IS-IS level-2
       ia - IS-IS inter area, * - candidate default, U - per-user static route
       o - ODR, P - periodic downloaded static route

Gateway of last resort is not set

     19.0.0.0/30 is subnetted, 1 subnets
C       19.19.19.0 is directly connected, FastEthernet0/1
     18.0.0.0/30 is subnetted, 1 subnets
C       18.18.18.0 is directly connected, FastEthernet0/0
     4.0.0.0/24 is subnetted, 1 subnets
C       4.4.4.0 is directly connected, Loopback4
     55.0.0.0/24 is subnetted, 1 subnets
C       55.55.254.0 is directly connected, Loopback55
     26.0.0.0/32 is subnetted, 1 subnets
O       26.26.26.26 [110/2] via 18.18.18.2, 21:26:19, FastEthernet0/0
     123.0.0.0/24 is subnetted, 1 subnets
O E2    123.123.254.0 [110/1] via 18.18.18.2, 19:22:33, FastEthernet0/0
     44.0.0.0/24 is subnetted, 1 subnets
C       44.44.44.0 is directly connected, Loopback0
2600#


GBL_C++:  [INFO]          sclass: 0011


module-1# show system internal aclqos prefix
3014660 0.0.0.0        ffffffff 37    15     FALSE FALSE
3014660 55.55.254.0    ff       37    17     FALSE FALSE

Shared Addr    Mask     Scope Class  RefCnt

============== ======== ===== ====== ======
123.123.254.0  ff       0     10935  1  


rtp-f1-p2-leaf1# show zoning-rule src-epg 17 
Rule ID         SrcEPG          DstEPG          FilterID        operSt          Scope           Action                              Priority       
=======         ======          ======          ========        ======          =====           ======                              ========       
4442            17              10935           5               enabled         3014660         permit                              fully_qual(6)  
rtp-f1-p2-leaf1#

 

 

Troubleshoot

 

Packet Flow

source – 55.55.254.1 (leaf 121)

dest – 123.123.254.1 (leaf 111)

Screen Shot 2017-06-17 at 10.57.57 AM.png

 

 

 

High Level

 

  1. Ping ingress on leaf 121
  2. ELAM trigger on NS
  3. ELAM RW goes to BCM for routing lookup
  4. ELAM trigger on NS with dst MAC 0c
  5. Check the encap pointer and IP for the dest leaf

 

<1 and 2>
GBL_C++:  [MSG]   - l2vec0 is complete
GBL_C++:  [INFO]        ce_da: 0022BDF819FF
GBL_C++:  [INFO]        ce_sa: 000C85B86860

GBL_C++:  [MSG]   - l3vec0 is complete
GBL_C++:  [INFO]      ip_da: 0000000000000000000000007B7BFE01
GBL_C++:  [INFO]      ip_sa: 0000000000000000000000003737FE01

GBL_C++:  [MSG]   - pktrw is complete
GBL_C++:  [INFO]        loopback: 1
GBL_C++:  [INFO]       uc_routed: 1
GBL_C++:  [INFO]    ol_encap_idx: 0002
GBL_C++:  [INFO]        ol_segid: 2E0004
GBL_C++:  [INFO]         ol_mark: 1
GBL_C++:  [INFO]           ol_lb: 1
GBL_C++:  [INFO]           ol_dl: 1
GBL_C++:  [INFO]            ol_e: 0
GBL_C++:  [INFO]           ol_sp: 1
GBL_C++:  [INFO]           ol_dp: 1
GBL_C++:  [INFO]          sclass: 0011
GBL_C++:  [INFO]          vpc_df: 1
GBL_C++:  [INFO]      brcm_proxy: 1
<3 and 4>

rtp-f1-p2-leaf1# bcm-shell-hw "l3 defip show" | grep 123.123
2160  3        123.123.254.0/24     00:00:00:00:00:00 100095    0     0     0    0 n
2161  37       123.123.254.0/24     00:00:00:00:00:00 100097    0     0     0    0 y

rtp-f1-p2-leaf1# bcm-shell-hw "l3 egress show" | grep 100097
Entry  Mac                 Vlan INTF PORT MOD MPLS_LABEL ToCpu Drop RefCount L3MC
100097  00:0c:0c:0c:0c:0c 4059 5014    17    4        -1   no   no    1   no


module-1(NS-elam-insel3)# set outer ipv4 src_ip 55.55.254.1 dst_ip 123.123.254.1

module-1(NS-elam-insel3)# set outer l2 dst_mac 000c.0c0c.0c0c


GBL_C++:  [MSG]   - l2vec0 is complete
GBL_C++:  [INFO]        ce_da: 000C0C0C0C0C
GBL_C++:  [INFO]        ce_sa: 000C85B86860

GBL_C++:  [MSG]   - l3vec0 is complete
GBL_C++:  [INFO]      ip_da: 0000000000000000000000007B7BFE01
GBL_C++:  [INFO]      ip_sa: 0000000000000000000000003737FE01

GBL_C++:  [MSG]   - pktrw is complete
GBL_C++:  [INFO]        loopback: 0
GBL_C++:  [INFO]       uc_routed: 1
GBL_C++:  [INFO]         adj_vld: 1
GBL_C++:  [INFO]       adj_index: 0044
GBL_C++:  [INFO]    ol_encap_idx: 3004
GBL_C++:  [INFO]        ol_segid: 2E0004
GBL_C++:  [INFO]         ol_mark: 1
GBL_C++:  [INFO]           ol_lb: 1
GBL_C++:  [INFO]           ol_dl: 1
GBL_C++:  [INFO]            ol_e: 0
GBL_C++:  [INFO]           ol_sp: 1
GBL_C++:  [INFO]           ol_dp: 1
GBL_C++:  [INFO]      brcm_proxy: 0


module-1(NS-elam-insel3)# show platform internal ns forwarding encap 0x3004

======================================================================================================================================================
                          TABLE INSTANCE : 0
======================================================================================================================================================
Legend
MD: Mode (LUX & RWX)        LB: Loopback
LE: Loopback ECMP           LB-PT: Loopback Port
ML: MET Last                TD: TTL Dec Disable
DV: Dst Valid               DT-PT: Dest Port
DT-NP: Dest Port Not-PC     ET: Encap Type
OP: Override PIF Pinning    HR: Higig DstMod RW
HG-MD: Higig DstMode        KV: Keep VNTAG
------------------------------------------------------------------------------------------------------------------------------------------------------
      M PORT L L LB MET  M T D DT DT E TST O H HG K M E
POS   D FTAG B E PT PTR  L D V PT NP T IDX P R MD V D T Dst MAC           DIP
------------------------------------------------------------------------------------------------------------------------------------------------------
12292 0  800 0 1  0    0 0 0 0  0  0 3   0 0 0  0 0 0 3 00:00:00:00:00:00 10.0.32.95     

======================================================================================================================================================
                          TABLE INSTANCE : 1
======================================================================================================================================================
Legend
MD: Mode (LUX & RWX)        LB: Loopback
LE: Loopback ECMP           LB-PT: Loopback Port
ML: MET Last                TD: TTL Dec Disable
DV: Dst Valid               DT-PT: Dest Port
DT-NP: Dest Port Not-PC     ET: Encap Type
OP: Override PIF Pinning    HR: Higig DstMod RW
HG-MD: Higig DstMode        KV: Keep VNTAG
------------------------------------------------------------------------------------------------------------------------------------------------------
      M PORT L L LB MET  M T D DT DT E TST O H HG K M E
POS   D FTAG B E PT PTR  L D V PT NP T IDX P R MD V D T Dst MAC           DIP
------------------------------------------------------------------------------------------------------------------------------------------------------
12292 0  fff 0 1  0    0 0 0 0  0  0 3   0 0 0  0 0 0 3 00:00:00:00:00:00 10.0.32.95     
module-1(NS-elam-insel3)# 


rtp-f1-p2-leaf1# acidiag fnvread | grep 10.0.32.95
     111        1      rtp-f1-p1-leaf1      SAL1819SAN6      10.0.32.95/32    leaf         active   0
rtp-f1-p2-leaf1#

Netflow!

Introduction

 

Prerequisites

 

Any topology is possible for Netflow, so long as the BD or ports to be monitored are deployed on a Cloud Scale ASIC switch (9300-EX ToRs).

 

For Netflow on the DVS, in-band management is required! The DVS does not support flow-level filtering.

 

Configure

 

The first step is to enable Netflow globally. The default setting is Tetration analytics, so it needs to be changed to Netflow.

 

Navigate to Fabric > Fabric Policies > Switch Policies > Fabric Node Controls > default

 

Here, click on Netflow and then submit.

 

Screen Shot 2016-10-18 at 7.22.51 PM.png

 

 

1.1      Configuring Netflow Monitoring for a BD.

 

Under a Tenant, navigate to Networking > Bridge Domains > BD_Name > Policy > Advanced/Troubleshooting

 

Screen Shot 2016-10-20 at 11.07.34 AM.png

 

Click on the plus symbol to begin the configuration. For this test, select “ipv4 type” under Netflow IP Filter Type. In the next column over, click the dropdown and select Create Flow Monitor.

 

Screen Shot 2016-12-01 at 8.55.50 AM.png

 

Name the flow monitor, then associate a flow record policy and, finally, a flow exporter.

 

 

Screen Shot 2016-12-01 at 8.56.16 AM.png

 

Screen Shot 2016-12-01 at 8.59.09 AM.pngScreen Shot 2016-12-01 at 8.59.30 AM.png

 

 

 

When defining a Flow Exporter, the source IP can be any address; ACI uses it as the source of the exported packets. In this example, 5.5.5.5/20 is used as the source.

 

NOTE: at the time of writing, a minimum of 12 host bits (i.e., a /20 or shorter prefix) is required for the source. This is because ACI inserts the leaf node ID into the last 12 bits of the source address. The leaf ID distinguishes the source of the packet when the same exporter source IP is configured on multiple leaf switches.
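The address arithmetic above can be sketched as follows. This is an illustrative calculation only, not an ACI API:

```python
import ipaddress

def exporter_source(base_cidr: str, node_id: int) -> str:
    """Derive the per-leaf NetFlow exporter source address by writing
    the leaf node ID into the last 12 host bits of the configured
    source prefix (illustrative sketch of the behavior described above)."""
    net = ipaddress.ip_network(base_cidr, strict=False)
    if net.prefixlen > 20:
        raise ValueError("need at least 12 host bits (/20 or shorter)")
    if node_id >= 2 ** 12:
        raise ValueError("node ID must fit in 12 bits")
    return str(net.network_address + node_id)

# 5.5.5.5/20 configured as the source; leaf node ID 216
print(exporter_source("5.5.5.5/20", 216))  # → 5.5.0.216
```

This matches the verification output further down, where leaf node 216 exports with source 5.5.0.216.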

 

The default Netflow port (2055) is also manually entered here.

 

The destination IP address can be an endpoint in an EPG or an IP address behind an L3 out. Make sure to select version 9 for template support. Finally, enter the EPG and VRF where the endpoint is learned.

 

Screen Shot 2016-12-01 at 9.03.00 AM.png

 

Screen Shot 2016-12-01 at 9.03.37 AM.png

 

Click submit here. The BD should now look similar to this:

 

Screen Shot 2016-12-01 at 9.04.13 AM.png

 

1.2      Verification.

For this first use case, the configuration is complete: a flow monitor with an associated flow record and flow exporter is applied to the BD.

 

To verify the configuration is active, connect to the 9300-EX leaf switch and use the “show flow” set of commands. Use the “esc esc” context-sensitive help to view all available options.

 

 

rtp-f2-p1-leaf6# show flow
 cache       Show Netflow Exporter Cache                       
 exporter    Show Netflow Exporter Configuration and Statistics
 hw-profile  Hardware Profile                                  
 interface   Flow interface information                       
 internal    Show internal nfm information                     
 monitor     Show Monitor Configuration                        
 record      Show Record Configuration                         
 timers      Show Timer Values                                 
 vlan        Flow vlan information

 

 

To check if the flow monitor is deployed:

 

rtp-f2-p1-leaf6# show flow monitor
Flow Monitor default:
    Use count: 2
    Flow Record: default
Flow Monitor dpita-tenant:dpita-flow-monitor:
    Use count: 1
    Flow Record: dpita-tenant:dpita-flow-record
    Bucket Id: 1
    Flow Exporter: dpita-tenant:dpita-exporter
 
Feature Prio: Netflow
 
rtp-f2-p1-leaf6#

 

 

 

The output shows that the default flow monitor is always present, while the custom dpita-flow-monitor has a record and exporter configured.

 

Next, the record configuration can be checked:

 

 

rtp-f2-p1-leaf6# show flow record
Flow record default:
    No. of users: 1
    Template ID: 0
    Fields:
Flow record dpita-tenant:dpita-flow-record:
    No. of users: 1
    Template ID: 256
    Fields:
        match ipv4 source address
        match ipv4 destination address
        match ip protocol
        match transport source-port
        match transport destination-port
 
Feature Prio: Netflow
 
rtp-f2-p1-leaf6#
 

 

 

 

Next, check the exporter policy to confirm the source and destination of the Netflow export:

 

rtp-f2-p1-leaf6# show flow exporter
Flow exporter dpita-tenant:dpita-exporter:
    Destination: 10.10.4.250
    VRF: dpita-tenant:dpita-context (1)
    Destination UDP Port 2055
    Source: 5.5.0.216
    DSCP 44
    Export Version 9
        Sequence number 21
        Data template timeout 0 seconds
    Exporter Statistics
        Number of Flow Records Exported 42
        Number of Templates Exported 21
        Number of Export Packets Sent 21
        Number of Export Bytes Sent 3080
        Number of Destination Unreachable Events 0
        Number of No Buffer Events 0
        Number of Packets Dropped (No Route to Host) 0
        Number of Packets Dropped (other) 0
        Number of Packets Dropped (Output Drops) 0
        Time statistics were last cleared: Never
 
Feature Prio: Netflow
 
rtp-f2-p1-leaf6#

 

 

 

Notice the last octet of the source of the exported traffic: it matches the node ID of this particular leaf. The destination port, destination server, context and EPG are all listed, along with the export version and some statistics.

 

 

rtp-f2-p1-leaf6# acidiag fnvread
10.0.200.89/32    leaf         active   0
     216        1      rtp-f2-p1-leaf6 10.0.200.88/32    leaf         active   0
    2101        1     rtp-f2-p1-spine1 10.0.200.94/32   spine         active   0
    2102        1     rtp-f2-p1-spine2 10.0.200.93/32   spine         active   0
 
Total 8 nodes
 
rtp-f2-p1-leaf6# cat /mit/sys/summary
# System
address          : 10.0.200.88
bootstrapState   : none
childAction      :
configIssues     :
currentTime      : 2016-10-21T11:01:14.241-04:00
dn               : sys
etepAddr         : 0.0.0.0
fabricDomain     : tsi-fab2-rtp
fabricId         : 1
fabricMAC        : 00:22:BD:F8:19:FF
id               : 216
inbMgmtAddr      : 172.18.242.121
inbMgmtAddr6     : ::
lcOwn            : local
modTs            : 2016-10-20T11:08:16.944-04:00
mode             : unspecified
monPolDn         : uni/fabric/monfab-default
name             : rtp-f2-p1-leaf6
nameAlias        :
oobMgmtAddr      : 10.122.254.159
oobMgmtAddr6     : ::
podId            : 1
remoteNetworkId  : 0
remoteNode       : no
rn               : sys
role             : leaf
serial           :
state            : in-service
status           :
systemUpTime     : 00:23:57:54.000
rtp-f2-p1-leaf6#
 
The VLAN/BD where Netflow is running can also be seen:
 
rtp-f2-p1-leaf6# show flow vlan
VLAN ID 11; BD Encap 15990734:
    Monitor(IPv4): dpita-tenant:dpita-flow-monitor
    Direction: Input

 

 

 

As well as the interfaces and some statistics:

 

 

rtp-f2-p1-leaf6# show flow interface
Interface port-channel3:
    Monitor(IPv4): default
    Direction: Input
Interface Ethernet1/25:
    Monitor(IPv4): default
    Direction: Input
 
Feature Prio: Netflow

 

Screen Shot 2016-12-01 at 9.05.49 AM.png

 

On the exporter, a simple tool to view the Netflow packets is Wireshark. Simply begin a capture on the interface and filter by “cflow”.

 

Below, two flows are seen: one TCP flow on port 5001 and another ICMP flow.
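The export packets Wireshark decodes as “cflow” are NetFlow v9 (RFC 3954). A minimal sketch of parsing the 20-byte export packet header, the same fields shown at the top of each decoded packet:

```python
import struct

def parse_v9_header(pkt: bytes) -> dict:
    """Parse the 20-byte NetFlow v9 export packet header (RFC 3954):
    version, FlowSet count, sysUpTime, UNIX seconds, sequence, source ID."""
    version, count, uptime, secs, seq, source_id = struct.unpack("!HHIIII", pkt[:20])
    return {"version": version, "count": count, "sys_uptime_ms": uptime,
            "unix_secs": secs, "sequence": seq, "source_id": source_id}

# A hand-built example header: version 9, 2 FlowSets, sequence 21
hdr = struct.pack("!HHIIII", 9, 2, 123456, 1480600000, 21, 0)
print(parse_v9_header(hdr))
```

The sequence number here corresponds to the “Sequence number” counter seen in the “show flow exporter” output on the leaf.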

 

Screen Shot 2016-12-01 at 9.06.20 AM.png

 

1.3      Access Port Netflow.

 

Netflow can be configured at the interface level as well as under the BD. To do so, navigate to Fabric > Access Policies > Global Policies > Analytics

 

Once again, creating a Netflow monitor, record and exporter policy is required.

 

Finally, attach the new Netflow Monitor to an Interface Policy Group

 

Screen Shot 2016-12-01 at 9.08.43 AM.png

 

1.4      Verification

As before, the same “show flow” commands can be used at the switch CLI to verify. Here is an example showing the two different exporter policies configured.

 

 

 
rtp-f2-p1-leaf6# show flow exporter
Flow exporter dpita-tenant:dpita-exporter:
    Destination: 10.10.4.250
    VRF: dpita-tenant:dpita-context (1)
    Destination UDP Port 2055
    Source: 5.5.0.216
    DSCP 44
    Export Version 9
        Sequence number 261
        Data template timeout 0 seconds
    Exporter Statistics
        Number of Flow Records Exported 1499
        Number of Templates Exported 258
        Number of Export Packets Sent 261
        Number of Export Bytes Sent 69356
        Number of Destination Unreachable Events 0
        Number of No Buffer Events 0
        Number of Packets Dropped (No Route to Host) 0
        Number of Packets Dropped (other) 0
        Number of Packets Dropped (Output Drops) 0
        Time statistics were last cleared: Never
Flow exporter dpita-access-monitor:
    Destination: 10.10.4.250
    VRF: dpita-tenant:dpita-context (1)
    Destination UDP Port 2055
    Source: 1.1.0.216
    DSCP 44
    Export Version 9
        Sequence number 159
        Data template timeout 0 seconds
    Exporter Statistics
        Number of Flow Records Exported 1257
        Number of Templates Exported 159
        Number of Export Packets Sent 159
        Number of Export Bytes Sent 53368
        Number of Destination Unreachable Events 0
        Number of No Buffer Events 0
        Number of Packets Dropped (No Route to Host) 0
        Number of Packets Dropped (other) 0
        Number of Packets Dropped (Output Drops) 0
        Time statistics were last cleared: Never
 
Feature Prio: Netflow
 
rtp-f2-p1-leaf6#

 

 

2        Netflow Monitoring Configuration.

 

REMINDER: For Netflow on the DVS, in-band management is required!

 

The first step is to enable Netflow globally. The default setting is Tetration analytics, so it needs to be changed to Netflow.

 

Navigate to Fabric > Fabric Policies > Switch Policies > Fabric Node Controls > default

 

Here, click on Netflow and then submit.

 

Screen Shot 2016-12-01 at 9.10.26 AM.png

 

2.1      Configuring Netflow Monitoring for a VMM Domain.

 

Navigate to your VMM Domain, VM Networking > Domain_Name > Policy/General > VSwitch Policies and create a VMM Exporter Policy.

 

Screen Shot 2016-12-01 at 9.10.59 AM.png

 

Input the name of the exporter, the source IP address, the destination port (2055 is the default Netflow port) and the destination IP.
The destination IP MUST be reachable through in-band management. Click submit.

 

Screen Shot 2016-12-01 at 9.11.25 AM.png

 

After clicking submit, there should be some new options under the VSwitch Policies for timers and sampling rate.

 

Screen Shot 2016-12-01 at 9.12.07 AM.png

 

 

Click submit on the VMM domain after making those changes.

2.2      Tenant Configuration

 

Navigate to the tenant EPG in use for the lab by clicking on Tenant > Application Profile_Name > Application EPGs > EPG_Name and click on the “Domains” folder.

 

Here, either associate the VMM domain for the first time or modify the existing association to enable Netflow on the VMM domain.

 

Option 1)

Adding the VMM domain with Netflow Enabled:

 

Screen Shot 2016-12-01 at 9.12.39 AM.png

 

Option 2)

After it is already associated:

 

Screen Shot 2016-12-01 at 9.13.07 AM.png

 

2.3      Verification.

The VMM exporter can be verified through the NXOS-style CLI on the APIC as well as on the vCenter portgroup.

 

 

rtp-f2-p1-apic1# show flow vmm-exporter dpita-exporter-vmm
 
 Exporter Name                     dstAddr          Port   srcAddr        
 --------------------------------  ---------------  -----  ---------------
 dpita-exporter-vmm                10.122.254.224   2055   3.3.3.3        
rtp-f2-p1-apic1#
 
rtp-f2-p1-leaf6# show flow
 cache       Show NetFlow Exporter Cache                       
 exporter    Show NetFlow Exporter Configuration and Statistics
 hw-profile  Hardware Profile                                  
 interface   Flow interface information                       
 internal    Show internal nfm information                     
 monitor     Show Monitor Configuration                        
 record      Show Record Configuration                         
 timers      Show Timer Values                                 
 vlan        Flow vlan information

 

 

Screen Shot 2016-12-01 at 9.13.53 AM.png

 

 

 

 

 

On the exporter, a simple tool to view the Netflow packets is Wireshark. Simply begin a capture on the interface and filter by “cflow”.

 

Screen Shot 2016-12-01 at 9.14.29 AM.png

 

Have some pesky __ui objects? Want to delete them?

Introduction

Customers who use the CLI or Basic GUI and then switch to the Advanced GUI will have a ton of these __ui objects causing havoc for troubleshooting. There is a simple script to run to delete them.

 

Solution

From an APIC, change directory to

/mit/uni/infra

then run this script:

 

for i in `find *__ui*`
do
  echo "removing $i"
  modelete $i
done
moconfig commit

Then they will disappear!!

 

admin@rtp-f1-p1-apic1:infra> find *__ui*
attentp-__ui_l121_eth1--95
attentp-__ui_l121_eth1--95/mo
attentp-__ui_l121_eth1--95/dompcont
attentp-__ui_l121_eth1--95/dompcont/assocdomp-[uni--l3dom-dpita-2600]
attentp-__ui_l121_eth1--95/dompcont/assocdomp-[uni--l3dom-dpita-2600]/summary
attentp-__ui_l121_eth1--95/dompcont/summary
attentp-__ui_l121_eth1--95/summary
attentp-__ui_l121_eth1--95/nscont
attentp-__ui_l121_eth1--95/nscont/rstoEncapInstDef-[allocencap-[uni--infra]--encapnsdef-[uni--infra--vlanns-[dpita-vlan-pool]-dynamic]].link
attentp-__ui_l121_eth1--95/nscont/rstoEncapInstDef-[allocencap-[uni--infra]--encapnsdef-[uni--infra--vlanns-[dpita-vlan-pool]-dynamic]]
attentp-__ui_l121_eth1--95/nscont/rstoEncapInstDef-[allocencap-[uni--infra]--encapnsdef-[uni--infra--vlanns-[dpita-vlan-pool]-dynamic]]/source-[uni--l3dom-dpita-2600]
attentp-__ui_l121_eth1--95/nscont/rstoEncapInstDef-[allocencap-[uni--infra]--encapnsdef-[uni--infra--vlanns-[dpita-vlan-pool]-dynamic]]/source-[uni--l3dom-dpita-2600]/summary
attentp-__ui_l121_eth1--95/nscont/rstoEncapInstDef-[allocencap-[uni--infra]--encapnsdef-[uni--infra--vlanns-[dpita-vlan-pool]-dynamic]]/summary
attentp-__ui_l121_eth1--95/nscont/summary
attentp-__ui_l121_eth1--95/rtattEntP-[uni--infra--funcprof--accportgrp-__ui_l121_eth1--95]
attentp-__ui_l121_eth1--95/rsdomP-[uni--l3dom-dpita-2600].link
attentp-__ui_l121_eth1--95/rsdomP-[uni--l3dom-dpita-2600]
attentp-__ui_l121_eth1--95/rsdomP-[uni--l3dom-dpita-2600]/mo
attentp-__ui_l121_eth1--95/rsdomP-[uni--l3dom-dpita-2600]/summary
hpaths-__ui_l111_eth1--1
hpaths-__ui_l111_eth1--1/rsHPathAtt-[topology--pod-1--paths-111--pathep-[eth1--1]].link
hpaths-__ui_l111_eth1--1/rsHPathAtt-[topology--pod-1--paths-111--pathep-[eth1--1]]
hpaths-__ui_l111_eth1--1/rsHPathAtt-[topology--pod-1--paths-111--pathep-[eth1--1]]/mo
hpaths-__ui_l111_eth1--1/rsHPathAtt-[topology--pod-1--paths-111--pathep-[eth1--1]]/summary
hpaths-__ui_l111_eth1--1/mo
hpaths-__ui_l111_eth1--1/summary
hpaths-__ui_l111_eth1--13
hpaths-__ui_l111_eth1--13/rsHPathAtt-[topology--pod-1--paths-111--pathep-[eth1--13]].link
hpaths-__ui_l111_eth1--13/mo
hpaths-__ui_l111_eth1--13/rsHPathAtt-[topology--pod-1--paths-111--pathep-[eth1--13]]
hpaths-__ui_l111_eth1--13/rsHPathAtt-[topology--pod-1--paths-111--pathep-[eth1--13]]/mo
hpaths-__ui_l111_eth1--13/rsHPathAtt-[topology--pod-1--paths-111--pathep-[eth1--13]]/summary
hpaths-__ui_l111_eth1--13/summary
hpaths-__ui_l121_eth1--95
hpaths-__ui_l121_eth1--95/rsHPathAtt-[topology--pod-2--paths-121--pathep-[eth1--95]]
hpaths-__ui_l121_eth1--95/rsHPathAtt-[topology--pod-2--paths-121--pathep-[eth1--95]]/mo
hpaths-__ui_l121_eth1--95/rsHPathAtt-[topology--pod-2--paths-121--pathep-[eth1--95]]/summary
hpaths-__ui_l121_eth1--95/mo
hpaths-__ui_l121_eth1--95/summary
hpaths-__ui_l121_eth1--95/rspathToAccBaseGrp.link
hpaths-__ui_l121_eth1--95/rspathToAccBaseGrp
hpaths-__ui_l121_eth1--95/rspathToAccBaseGrp/mo
hpaths-__ui_l121_eth1--95/rspathToAccBaseGrp/summary
hpaths-__ui_l121_eth1--95/rsHPathAtt-[topology--pod-2--paths-121--pathep-[eth1--95]].link
admin@rtp-f1-p1-apic1:infra> 
admin@rtp-f1-p1-apic1:infra> 
admin@rtp-f1-p1-apic1:infra> 
admin@rtp-f1-p1-apic1:infra> 
admin@rtp-f1-p1-apic1:infra> 
admin@rtp-f1-p1-apic1:infra> find *__ui* | wc -l
44
admin@rtp-f1-p1-apic1:infra> for i in `find *__ui*`
> do 
> echo "removing $i" 
> modelete $i
> done

removing attentp-__ui_l121_eth1--95
This command is being deprecated on APIC controller, please use NXOS-style equivalent command
removing attentp-__ui_l121_eth1--95/mo
This command is being deprecated on APIC controller, please use NXOS-style equivalent command
Invalid rn string mo under class infra.AttEntityP
Error executing command, check logs for details
removing attentp-__ui_l121_eth1--95/dompcont
This command is being deprecated on APIC controller, please use NXOS-style equivalent command
Mo 'attentp-__ui_l121_eth1--95/dompcont' cannot be deleted
removing attentp-__ui_l121_eth1--95/dompcont/assocdomp-[uni--l3dom-dpita-2600]
This command is being deprecated on APIC controller, please use NXOS-style equivalent command
Mo 'attentp-__ui_l121_eth1--95/dompcont/assocdomp-[uni--l3dom-dpita-2600]' cannot be deleted
removing attentp-__ui_l121_eth1--95/dompcont/assocdomp-[uni--l3dom-dpita-2600]/summary
This command is being deprecated on APIC controller, please use NXOS-style equivalent command
Invalid rn string summary under class infra.AssocDomP
Error executing command, check logs for details
removing attentp-__ui_l121_eth1--95/dompcont/summary
This command is being deprecated on APIC controller, please use NXOS-style equivalent command
Invalid rn string summary under class infra.ContDomP
Error executing command, check logs for details
removing attentp-__ui_l121_eth1--95/summary
This command is being deprecated on APIC controller, please use NXOS-style equivalent command
Invalid rn string summary under class infra.AttEntityP
Error executing command, check logs for details
removing attentp-__ui_l121_eth1--95/nscont
This command is being deprecated on APIC controller, please use NXOS-style equivalent command
Mo 'attentp-__ui_l121_eth1--95/nscont' cannot be deleted
removing attentp-__ui_l121_eth1--95/nscont/rstoEncapInstDef-[allocencap-[uni--infra]--encapnsdef-[uni--infra--vlanns-[dpita-vlan-pool]-dynamic]].link
This command is being deprecated on APIC controller, please use NXOS-style equivalent command
Mo 'attentp-__ui_l121_eth1--95/nscont/rstoEncapInstDef-[allocencap-[uni--infra]--encapnsdef-[uni--infra--vlanns-[dpita-vlan-pool]-dynamic]].link' cannot be deleted
removing attentp-__ui_l121_eth1--95/nscont/rstoEncapInstDef-[allocencap-[uni--infra]--encapnsdef-[uni--infra--vlanns-[dpita-vlan-pool]-dynamic]]
This command is being deprecated on APIC controller, please use NXOS-style equivalent command
Mo 'attentp-__ui_l121_eth1--95/nscont/rstoEncapInstDef-[allocencap-[uni--infra]--encapnsdef-[uni--infra--vlanns-[dpita-vlan-pool]-dynamic]]' cannot be deleted
removing attentp-__ui_l121_eth1--95/nscont/rstoEncapInstDef-[allocencap-[uni--infra]--encapnsdef-[uni--infra--vlanns-[dpita-vlan-pool]-dynamic]]/source-[uni--l3dom-dpita-2600]
This command is being deprecated on APIC controller, please use NXOS-style equivalent command
Mo 'attentp-__ui_l121_eth1--95/nscont/rstoEncapInstDef-[allocencap-[uni--infra]--encapnsdef-[uni--infra--vlanns-[dpita-vlan-pool]-dynamic]]/source-[uni--l3dom-dpita-2600]' cannot be deleted
removing attentp-__ui_l121_eth1--95/nscont/rstoEncapInstDef-[allocencap-[uni--infra]--encapnsdef-[uni--infra--vlanns-[dpita-vlan-pool]-dynamic]]/source-[uni--l3dom-dpita-2600]/summary
This command is being deprecated on APIC controller, please use NXOS-style equivalent command
Invalid rn string summary under class fabric.CreatedBy
Error executing command, check logs for details
removing attentp-__ui_l121_eth1--95/nscont/rstoEncapInstDef-[allocencap-[uni--infra]--encapnsdef-[uni--infra--vlanns-[dpita-vlan-pool]-dynamic]]/summary
This command is being deprecated on APIC controller, please use NXOS-style equivalent command
Invalid rn string summary under class infra.RsToEncapInstDef
Error executing command, check logs for details
removing attentp-__ui_l121_eth1--95/nscont/summary
This command is being deprecated on APIC controller, please use NXOS-style equivalent command
Invalid rn string summary under class infra.ContNS
Error executing command, check logs for details
removing attentp-__ui_l121_eth1--95/rtattEntP-[uni--infra--funcprof--accportgrp-__ui_l121_eth1--95]
This command is being deprecated on APIC controller, please use NXOS-style equivalent command
Mo 'attentp-__ui_l121_eth1--95/rtattEntP-[uni--infra--funcprof--accportgrp-__ui_l121_eth1--95]' cannot be deleted
removing attentp-__ui_l121_eth1--95/rsdomP-[uni--l3dom-dpita-2600].link
This command is being deprecated on APIC controller, please use NXOS-style equivalent command
removing attentp-__ui_l121_eth1--95/rsdomP-[uni--l3dom-dpita-2600]
This command is being deprecated on APIC controller, please use NXOS-style equivalent command
removing attentp-__ui_l121_eth1--95/rsdomP-[uni--l3dom-dpita-2600]/mo
This command is being deprecated on APIC controller, please use NXOS-style equivalent command
Invalid rn string mo under class infra.RsDomP
Error executing command, check logs for details
removing attentp-__ui_l121_eth1--95/rsdomP-[uni--l3dom-dpita-2600]/summary
This command is being deprecated on APIC controller, please use NXOS-style equivalent command
Invalid rn string summary under class infra.RsDomP
Error executing command, check logs for details
removing hpaths-__ui_l111_eth1--1
This command is being deprecated on APIC controller, please use NXOS-style equivalent command
removing hpaths-__ui_l111_eth1--1/rsHPathAtt-[topology--pod-1--paths-111--pathep-[eth1--1]].link
This command is being deprecated on APIC controller, please use NXOS-style equivalent command
[Errno 2] No such file or directory: 'hpaths-__ui_l111_eth1--1/rsHPathAtt-[topology--pod-1--paths-111--pathep-[eth1--1]].link/mo'
removing hpaths-__ui_l111_eth1--1/rsHPathAtt-[topology--pod-1--paths-111--pathep-[eth1--1]]
This command is being deprecated on APIC controller, please use NXOS-style equivalent command
removing hpaths-__ui_l111_eth1--1/rsHPathAtt-[topology--pod-1--paths-111--pathep-[eth1--1]]/mo
This command is being deprecated on APIC controller, please use NXOS-style equivalent command
Invalid rn string mo under class infra.RsHPathAtt
Error executing command, check logs for details
removing hpaths-__ui_l111_eth1--1/rsHPathAtt-[topology--pod-1--paths-111--pathep-[eth1--1]]/summary
This command is being deprecated on APIC controller, please use NXOS-style equivalent command
Invalid rn string summary under class infra.RsHPathAtt
Error executing command, check logs for details
removing hpaths-__ui_l111_eth1--1/mo
This command is being deprecated on APIC controller, please use NXOS-style equivalent command
Invalid rn string mo under class infra.HPathS
Error executing command, check logs for details
removing hpaths-__ui_l111_eth1--1/summary
This command is being deprecated on APIC controller, please use NXOS-style equivalent command
Invalid rn string summary under class infra.HPathS
Error executing command, check logs for details
removing hpaths-__ui_l111_eth1--13
This command is being deprecated on APIC controller, please use NXOS-style equivalent command
removing hpaths-__ui_l111_eth1--13/rsHPathAtt-[topology--pod-1--paths-111--pathep-[eth1--13]].link
This command is being deprecated on APIC controller, please use NXOS-style equivalent command
[Errno 2] No such file or directory: 'hpaths-__ui_l111_eth1--13/rsHPathAtt-[topology--pod-1--paths-111--pathep-[eth1--13]].link/mo'
removing hpaths-__ui_l111_eth1--13/mo
This command is being deprecated on APIC controller, please use NXOS-style equivalent command
Invalid rn string mo under class infra.HPathS
Error executing command, check logs for details
removing hpaths-__ui_l111_eth1--13/rsHPathAtt-[topology--pod-1--paths-111--pathep-[eth1--13]]
This command is being deprecated on APIC controller, please use NXOS-style equivalent command
removing hpaths-__ui_l111_eth1--13/rsHPathAtt-[topology--pod-1--paths-111--pathep-[eth1--13]]/mo
This command is being deprecated on APIC controller, please use NXOS-style equivalent command
Invalid rn string mo under class infra.RsHPathAtt
Error executing command, check logs for details
removing hpaths-__ui_l111_eth1--13/rsHPathAtt-[topology--pod-1--paths-111--pathep-[eth1--13]]/summary
This command is being deprecated on APIC controller, please use NXOS-style equivalent command
Invalid rn string summary under class infra.RsHPathAtt
Error executing command, check logs for details
removing hpaths-__ui_l111_eth1--13/summary
This command is being deprecated on APIC controller, please use NXOS-style equivalent command
Invalid rn string summary under class infra.HPathS
Error executing command, check logs for details
removing hpaths-__ui_l121_eth1--95
This command is being deprecated on APIC controller, please use NXOS-style equivalent command
removing hpaths-__ui_l121_eth1--95/rsHPathAtt-[topology--pod-2--paths-121--pathep-[eth1--95]]
This command is being deprecated on APIC controller, please use NXOS-style equivalent command
removing hpaths-__ui_l121_eth1--95/rsHPathAtt-[topology--pod-2--paths-121--pathep-[eth1--95]]/mo
This command is being deprecated on APIC controller, please use NXOS-style equivalent command
Invalid rn string mo under class infra.RsHPathAtt
Error executing command, check logs for details
removing hpaths-__ui_l121_eth1--95/rsHPathAtt-[topology--pod-2--paths-121--pathep-[eth1--95]]/summary
This command is being deprecated on APIC controller, please use NXOS-style equivalent command
Invalid rn string summary under class infra.RsHPathAtt
Error executing command, check logs for details
removing hpaths-__ui_l121_eth1--95/mo
This command is being deprecated on APIC controller, please use NXOS-style equivalent command
Invalid rn string mo under class infra.HPathS
Error executing command, check logs for details
removing hpaths-__ui_l121_eth1--95/summary
This command is being deprecated on APIC controller, please use NXOS-style equivalent command
Invalid rn string summary under class infra.HPathS
Error executing command, check logs for details
removing hpaths-__ui_l121_eth1--95/rspathToAccBaseGrp.link
This command is being deprecated on APIC controller, please use NXOS-style equivalent command
Invalid rn string rspathToAccBaseGrp.link under class infra.HPathS
Error executing command, check logs for details
removing hpaths-__ui_l121_eth1--95/rspathToAccBaseGrp
This command is being deprecated on APIC controller, please use NXOS-style equivalent command
removing hpaths-__ui_l121_eth1--95/rspathToAccBaseGrp/mo
This command is being deprecated on APIC controller, please use NXOS-style equivalent command
Invalid rn string mo under class infra.RsPathToAccBaseGrp
Error executing command, check logs for details
removing hpaths-__ui_l121_eth1--95/rspathToAccBaseGrp/summary
This command is being deprecated on APIC controller, please use NXOS-style equivalent command
Invalid rn string summary under class infra.RsPathToAccBaseGrp
Error executing command, check logs for details
removing hpaths-__ui_l121_eth1--95/rsHPathAtt-[topology--pod-2--paths-121--pathep-[eth1--95]].link
This command is being deprecated on APIC controller, please use NXOS-style equivalent command
[Errno 2] No such file or directory: 'hpaths-__ui_l121_eth1--95/rsHPathAtt-[topology--pod-2--paths-121--pathep-[eth1--95]].link/mo'


admin@rtp-f1-p1-apic1:infra> moconfig commit


This command is being deprecated on APIC controller, please use NXOS-style equivalent command
Committing mo 'uni/l3dom-dpita-2600'
Committing mo 'uni/infra/hpaths-__ui_l121_eth1--95'
Committing mo 'uni/infra/hpaths-__ui_l121_eth1--95/rspathToAccBaseGrp'
Committing mo 'uni/infra/hpaths-__ui_l121_eth1--95/rsHPathAtt-[topology/pod-2/paths-121/pathep-[eth1/95]]'
Committing mo 'uni/infra/hpaths-__ui_l111_eth1--13'
Committing mo 'uni/infra/hpaths-__ui_l111_eth1--13/rsHPathAtt-[topology/pod-1/paths-111/pathep-[eth1/13]]'
Committing mo 'uni/infra/hpaths-__ui_l111_eth1--1'
Committing mo 'uni/infra/hpaths-__ui_l111_eth1--1/rsHPathAtt-[topology/pod-1/paths-111/pathep-[eth1/1]]'
Committing mo 'uni/infra/attentp-__ui_l121_eth1--95'
Committing mo 'uni/infra/attentp-__ui_l121_eth1--95/rsdomP-[uni/l3dom-dpita-2600]'

All mos committed successfully.
admin@rtp-f1-p1-apic1:infra> find *__ui*
find: `*__ui*': No such file or directory
admin@rtp-f1-p1-apic1:infra> pwd
/mit/uni/infra
admin@rtp-f1-p1-apic1:infra> 

 

Per Port VLAN

Introduction

Per Port VLAN is a feature that allows ACI to reuse the same VLAN encap, even on the same switch and in the same tenant! This feature is very useful for multi-tenancy situations where two tenants need to trunk the same VLAN on an interface.

By default, ACI does its EPG classification by encap/VLAN alone. This feature enables ACI to classify based on the (port, VLAN) pair.
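The change in classification key can be sketched with a simple lookup table. The port, tenant and EPG names below are hypothetical, purely for illustration:

```python
# Default "Port Global" scope: the VLAN encap alone selects the EPG,
# so vlan-373 can map to only one EPG per switch.
global_scope = {373: "Tenant1:EPG1"}

# "Port Local" scope: the (port, VLAN) pair selects the EPG, so the
# same encap can be reused on different ports for different tenants.
port_local_scope = {
    ("eth1/25", 373): "Tenant1:EPG1",
    ("eth1/26", 373): "Tenant2:EPG2",  # same encap, different EPG
}
print(port_local_scope[("eth1/26", 373)])  # → Tenant2:EPG2
```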

Prerequisites

Requirements

  • Separate VLAN pool for the VLANs to be duplicated (namespace)
  • EPGs need to have a unique BD (can be the same or a different VRF)
  • Interface Policy Group needs an L2 Interface policy with Port Local scope

Configure

1. Configure unique (different) VLAN pools with the same encap inside

Screen Shot 2016-09-06 at 9.03.54 AM.pngScreen Shot 2016-09-06 at 9.04.03 AM.png

2. Enable port local scope for the Interface Policy Group for the existing interface already using the VLAN

Screen Shot 2016-09-06 at 9.31.09 AM.png

3. Under the Tenant, create a new BD and EPG for the VLAN.

Verify

The output below from ELTMC shows two sets of BD/EPG. The FD_VLAN for 373 is shown twice, but it is important to note that the Fabric_encap is unique. This fabric_encap is generated based on the VLAN pool/namespace. That is why a unique VLAN pool is required: so that the fabric_encap VXLAN/VNID is unique.

module-1# show system internal eltmc info vlan brief
VLAN-Info
VlanId  HW_VlanId Type            Access_enc Access_enc Fabric_enc Fabric_enc BDVlan  
                                  Type                 Type                      
==================================================================================
      1        1    BD_CTRL_VLAN    802.1q      4094     VXLAN  16777209       0
      4       13     BD_EXT_VLAN    802.1q        99     VXLAN  15499165       4
      5        2         BD_VLAN   Unknown         0     VXLAN  15761386       5
      8        3         BD_VLAN   Unknown         0     VXLAN  15531930       8
      9       16         FD_VLAN    802.1q      2265     VXLAN      9402       8
     10        4         BD_VLAN   Unknown         0     VXLAN  15105997      10
     11       17         FD_VLAN    802.1q      2261     VXLAN      9398      10
     12        5         BD_VLAN   Unknown         0     VXLAN  16351141      12
     13       18         FD_VLAN    802.1q      2259     VXLAN      9396      12
     14       14     BD_EXT_VLAN    802.1q      2198     VXLAN  15695749      14
     15       19         FD_VLAN    802.1q      2262     VXLAN      9399       8
     16        6         BD_VLAN   Unknown         0     VXLAN  16351138      16
     17       20         FD_VLAN    802.1q      2255     VXLAN      9392      16
     18        7         BD_VLAN   Unknown         0     VXLAN  15925209      18
     19       21         FD_VLAN    802.1q      2260     VXLAN      9397      18
     20        8         BD_VLAN   Unknown         0     VXLAN  16056263      20
     21       22         FD_VLAN    802.1q      2263     VXLAN      9400      20
     22       15     BD_EXT_VLAN    802.1q      2104     VXLAN  14811122      22
     25        9         BD_VLAN   Unknown         0     VXLAN  16056264      25
     26       10         FD_VLAN    802.1q       375     VXLAN      9811      25
     27       23         BD_VLAN   Unknown         0     VXLAN  16416668      27
     28       24         FD_VLAN    802.1q       373     VXLAN      9809      27
     29       11         BD_VLAN   Unknown         0     VXLAN  16121791      29
     30       25         FD_VLAN    802.1q       374     VXLAN      9810      29
     31       12         BD_VLAN   Unknown         0     VXLAN  16187318      31
     32       26         FD_VLAN    802.1q       390     VXLAN      9826      31
     35       31         FD_VLAN    802.1q      1100     VXLAN      8392       5
     42       32         BD_VLAN   Unknown         0     VXLAN  14942179      42
     43       33         FD_VLAN    802.1q      2195     VXLAN      8592      42
     45       34         BD_VLAN   Unknown         0     VXLAN  16416669      45
     46       35         FD_VLAN    802.1q       373     VXLAN     10592      45
module-1#

Note: BD1/EPG1 has encap vlan-373 and is uniquely identified in the fabric as BD-16416668/EPG-9809. BD2/EPG2 reuses encap vlan-373, BUT the fabric encap for that BD/EPG pair is different: BD-16416669/EPG-10592.

Also interesting to note is the HW column, which shows the front panel ASIC VLAN and how each encap is translated uniquely in hardware.

module-1# show system internal eltmc info interface e1/25
            IfInfo: 
           interface:   Ethernet1/25   :::         ifindex:      436305920
                 iod:             54   :::           state:             up
            External:          FALSE

      NorthStar Info:
                 Mod:              0   :::            Port:             25
          port_layer:             L2   :::     fabric_port:              0
           port_mode:          trunk   :::  native_vlan_id:              0
         switchingSt:        enabled   :::           speed:          10000

     Storm Ctrl Info:
                Type:        Percent
            Stm_rate:     100.000000   :::       Stm_burst:     100.000000
      Stm_rate(Mbps):   10000.000000   ::: Stm_burst(Mbps):   10000.000000
      Stm_rate(toks):           6250   ::: Stm_burst(toks):          65535
       Stm_Pol_Apply:              0

xlate_l2_classid_unset:              0
            vlan_bmp:          25-32
      vlan_bmp_count:              8
        acc_vlan_bmp:    373-375,390
  acc_vlan_bmp_count:              4
     scope(0:G, 1:L):              1   :::       class_id::              4
   mac_limit_reached:              0   :::       mac_limit:              0
port_sec_feature_set:              0   ::: mac_limit_action:              0

      NorthStar Info:
          pc_mbr_idx:             11   ::: dest_learn_port:             12
      dest_encap_idx:             56

            BCM Info:

[SDB INFO]:
                 iod:             54
         pc_if_index:              0
        fab_if_index:              0
               sv_if:              0
                 svp:              0
          bcm_l3_eif:              0
       internal_vlan:              0
          encap_vlan:              0
                 mod:              0
                port:             25
         non_byp_mod:              0
        non_byp_port:             25
         ns_lrn_port:             12
           v6_tbl_id:              0
           v4_tbl_id:              0
          router_mac:00.00.00.00.00.00
          unnumbered:              0
        bcm_trunk_id:              0
        tunnel_mp st:     1096941571
           tep_ip st:     1096941571
          ip_if_mode:              0
          bcm_vrf_id:              0
         Overlay idx:              0
            External:          FALSE

FP Entries
    ifp_port_mask_m0:            666
::::
module-1#

With the output above we queried ELTMC again, this time for information on how the interface is programmed. Note that the scope field is set to local (scope 1:L). This allows the front panel ASIC to maintain per-port translations, so ACI can classify traffic on the (vlan, port) tuple.

The moquery below for the concrete VLAN object, vlanCktEp, filtered on encap==vlan-373, returns two objects on that particular leaf: the same encap VLAN appears twice, but each object has a unique DN and EPG DN.

fab1-p1-leaf1# moquery -c vlanCktEp -f 'vlan.CktEp.encap=="vlan-373"'
Total Objects shown: 2

# vlan.CktEp
encap                : vlan-373
adminSt              : active
allowUsegUnsupported : 0
childAction          : 
classPrefOperSt      : encap
createTs             : 2016-09-06T08:45:52.000-04:00
ctrl                 : policy-enforced
dn                   : sys/ctx-[vxlan-2326529]/bd-[vxlan-16416668]/vlan-[vlan-373]
enfPref              : hw
epUpSeqNum           : 0
epgDn                : uni/tn-dpita-tenant/ap-dpita-AP/epg-dpita-EPG1
excessiveTcnFlushCnt : 0
fabEncap             : vxlan-9809
fwdCtrl              : mdst-flood
hwId                 : 24
id                   : 28
lcOwn                : local
modTs                : 2016-09-06T08:45:54.308-04:00
mode                 : CE
monPolDn             : uni/tn-common/monepg-default
name                 : dpita-tenant:dpita-AP:dpita-EPG1
operSt               : up
operStQual           : unspecified
operState            : 0
pcTag                : 16391
proxyArpUnsupported  : 0
qosPrio              : unspecified
qosmCfgFailedBmp     : 
qosmCfgFailedTs      : 00:00:00:00.000
qosmCfgState         : 0
rn                   : vlan-[vlan-373]
status               : 
type                 : ckt-vlan
vlanmgrCfgFailedBmp  : 
vlanmgrCfgFailedTs   : 00:00:00:00.000
vlanmgrCfgState      : 0

# vlan.CktEp
encap                : vlan-373
adminSt              : active
allowUsegUnsupported : 0
childAction          : 
classPrefOperSt      : encap
createTs             : 2016-09-06T08:46:18.000-04:00
ctrl                 : policy-enforced
dn                   : sys/ctx-[vxlan-2326529]/bd-[vxlan-16416669]/vlan-[vlan-373]
enfPref              : hw
epUpSeqNum           : 0
epgDn                : uni/tn-dpita-tenant/ap-dpita-AP/epg-test-ppv
excessiveTcnFlushCnt : 0
fabEncap             : vxlan-10592
fwdCtrl              : mdst-flood
hwId                 : 35
id                   : 46
lcOwn                : local
modTs                : 2016-09-06T08:46:19.964-04:00
mode                 : CE
monPolDn             : uni/tn-common/monepg-default
name                 : dpita-tenant:dpita-AP:test-ppv
operSt               : up
operStQual           : unspecified
operState            : 0
pcTag                : 49155
proxyArpUnsupported  : 0
qosPrio              : unspecified
qosmCfgFailedBmp     : 
qosmCfgFailedTs      : 00:00:00:00.000
qosmCfgState         : 0
rn                   : vlan-[vlan-373]
status               : 
type                 : ckt-vlan
vlanmgrCfgFailedBmp  : 
vlanmgrCfgFailedTs   : 00:00:00:00.000
vlanmgrCfgState      : 0

fab1-p1-leaf1#
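The same two objects can also be pulled straight from the leaf's local REST endpoint with icurl; the class name and encap filter mirror the moquery above (this is a sketch of the query, output omitted):

```text
icurl 'http://localhost:7777/api/class/vlanCktEp.json?query-target-filter=eq(vlanCktEp.encap,"vlan-373")'
```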

 

 

ACI FCoE NPV

Introduction

ACI 2.0 (code-named Congo) adds support for FCoE NPV on a single leaf. This appears to be supported only on newer hardware such as the 93180YC-EX.

 

 

Prerequisites

 

Requirements

  • FCF – Fiber Channel Forwarder. Since the ACI leaf runs in NPV mode, some other storage switch needs to be able to apply zoning (the storage world's ACLs). This can be an MDS or an N5K with the right license.
    • Notice that feature fcoe and feature npiv are enabled.
    • feature fcoe-npv would convert the switch itself to an NPV switch, which is not what is needed for connection to ACI.
    • ACI is the NPV switch, and it needs to connect to an NPIV-capable switch (the FCF switch).

 

n5k-1# show feature
Feature Name          Instance  State   
--------------------  --------  --------

fcoe                  1         enabled 
fcoe-npv              1         disabled
fcsp                  1         disabled
fex                   1         disabled
fport-channel-trunk   1         disabled
interface-vlan        1         disabled
lacp                  1         enabled 
ldap                  1         disabled
lldp                  1         enabled 
msdp                  1         disabled
npiv                  1         enabled 
npv                   1         disabled

n5k-1# show lic usage 
Feature                      Ins  Lic   Status Expiry Date Comments
                                 Count
--------------------------------------------------------------------------------
FCOE_NPV_PKG                  Yes   -   Unused Never       -
FM_SERVER_PKG                 No    -   Unused             -
ENTERPRISE_PKG                Yes   -   Unused Never       -
FC_FEATURES_PKG               Yes   -   In use Never       -
VMFEX_FEATURE_PKG             No    -   Unused             -
ENHANCED_LAYER2_PKG           No    -   Unused             -
LAN_BASE_SERVICES_PKG         Yes   -   In use Never       -
LAN_ENTERPRISE_SERVICES_PKG   Yes   -   Unused Never       -
--------------------------------------------------------------------------------
n5k-1#

 

  • A host with a CNA
  • F ports connect to devices that are going to log in; F ports expect logins. They are usually used for hosts with a CNA that will be logging in to the Fiber Channel network.
  • NP ports are ports that do the logging in. A CNA on a host is an N port. NP stands for Node Proxy: the port itself performs a login to the upstream NPIV switch and, since it is a proxy, it also forwards logins from the actual N ports behind it.
  • The FCF device's connection to the ACI leaf NP port should be an F port, since F ports expect logins.
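To tie the port roles together, the FCF side of the link should look much like the N5K configuration in the appendix: NPIV enabled, and an F-mode vfc bound to the Ethernet port facing the ACI leaf (interface numbers here are illustrative):

```text
feature fcoe
feature npiv
! vfc facing the ACI leaf NP port; F is the default port mode on the N5K
interface vfc125
  bind interface Ethernet1/25
  switchport trunk allowed vsan 406
  no shutdown
```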

 

Configure

 

Network Diagram

 

Screen Shot 2016-07-03 at 9.26.29 AM

 

Configurations

 

There are some new access policies related to FCoE configuration on the ACI fabric. They are:

  • Fiber Channel Domains – Like any other domain, ties to AAEPs. This domain takes a VSAN pool, a VLAN pool, and a VSAN attribute.
  • VSAN Pool – Static pool for a range of VSANs. Ties to a domain.
  • VSAN Attributes – Takes a VLAN encap and ties it to a VSAN encap. Equivalent to the NXOS commands “vlan x” / “fcoe vsan x”; also has the option to change the load-balancing option.
  • Priority Flow Control Policy – IPG option to set PFC to Auto|Off|On.
  • Slow Drain Policy – Specifies how to handle FCoE packets causing congestion.
  • Fiber Channel Interface Policy – Specifies the type of FC interface to be configured, F|NP.
  • Fiber Channel SAN Policy – Default policy for the FC Map MAC address on the NPV leaf.
  • Fiber Channel Node Policy – Default policy for the load-balancing options.
  • QoS Class Policies – Found under Global Policies. Used to set QoS and globally turn on PFC.
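As a mental model, the VSAN Attribute does for the leaf what this standalone NXOS snippet does, binding a VLAN encap to a VSAN (the 406/406 pairing is the one used in this lab):

```text
vlan 406
  fcoe vsan 406
vsan database
  vsan 406
```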

 

We will now walk through the steps to configure the access policies for the topology above. One port needs to be configured for the FCF, the other port for the host/CNA.

 

Starting from the bottom at Domains > Fiber Channel Domain:

  1. Right-Click and create a new Fiber Channel Domain
    1. Name
    2. Create a new VSAN Pool
      1. Name
      2. Encap Block (Screen Shot 2016-06-25 at 8.41.59 AM)
    3. Create a new VLAN Pool
      1. Needs to be static
    4. Create a new VSAN Attribute
      1. Name
      2. Click the + to add a new VSAN Attribute Entry
        1. Specify the VSAN Encap
        2. Specify the VLAN Encap
        3. Loadbalancing type (default is src-dst-ox-id).

 

Screen Shot 2016-06-25 at 8.42.11 AM

Screen Shot 2016-06-25 at 8.42.26 AM

 

 

  2. Under Global Policies > QoS Class Policies:
    1. Click Level1
    2. Enable PFC Admin State
    3. Set the CoS to 3
  3. Under Global Policies > AAEP:
    1. Create an AAEP for the Host/CNA
      1. Associate it to the domain created
      2. Do not associate to interfaces just yet
    2. Create an AAEP for the FCF
      1. Associate it to the domain created
      2. Do not associate to interfaces just yet
  • Don’t do any EPG deployment just yet
  4. Create Interface Policy Groups
    1. Create a Leaf Access IPG for the Host/CNA
      1. Create a Priority Flow Control policy and set it to “ON”
      2. Create a Fiber Channel Interface Policy
        1. Since this is the Host/CNA, configure the leaf/NPV switch virtual interface as an “F” port, since it expects a FLOGI from the host
  • Associate the AAEP for the Host/CNA created in step 3a

Screen Shot 2016-06-25 at 8.42.39 AM

 

  5. Create an IPG for the FCF
    1. Use the PFC policy created in step 4ai above
    2. Create a new Fiber Channel Interface Policy
      1. Since this is the external interface connecting to the FCF, the ACI Leaf/NPV switch needs to use an NP port in order to log in to the external NPIV F port.
      2. Screen Shot 2016-06-25 at 8.42.46 AM
    3. Create an Interface Profile for the Host/CNA
      1. Associate the IPG created above
    4. Create an Interface Profile for the FCF
      1. Associate the IPG created above

Screen Shot 2016-06-25 at 8.42.56 AM

Screen Shot 2016-06-25 at 8.43.20 AM

 

 

  6. Create a Switch Profile for the leaf that has the host and FCF connected.
    1. Associate the interface selector profile for the host/CNA and the FCF

 

That should take care of access policies.

 

Tenant Configuration

Moving on to the tenant configuration, begin by creating a Bridge Domain:

 

  1. Make sure this bridge domain is set to “Type: FC”
    1. Name
    2. Type
    3. VRF
    4. Next, Next, Finish

Screen Shot 2016-06-25 at 8.43.28 AM

 

  1. Create an EPG
    1. Tie it to the FC-BD
    2. Set the QoS Class to the previously configured class with PFC enabled and CoS3
    3. Screen Shot 2016-06-25 at 8.43.54 AM
  2. Configure EPG
    1. Associate the fiber channel domain
    2. Add a Fiber Channel Path
      1. Two paths are needed, one for the port with the Host/CNA and another for the FCF
      2. Right-click and select “deploy fiber channel”
        1. Specify the Path for the Host/CNA from the drop down.
        2. Enter the VSAN
        3. Select Native

Screen Shot 2016-06-25 at 8.44.14 AMScreen Shot 2016-06-25 at 8.44.26 AMScreen Shot 2016-06-25 at 8.44.35 AM

  • Repeat for the other interface

Screen Shot 2016-06-25 at 8.44.43 AM

 

Caveat: A native (untagged) VLAN is needed on all FCoE host-facing ports

 

At the time of writing, it is required that the port facing the host/CNA have an access (untagged) VLAN on it. This is required for FIP to take place. Without FIP, the vfc toward the host will stay down with “VSAN is initializing due to not all VSANs up on trunk”, as seen in the npv status verification output.

 

To remedy this situation, there are two ways to proceed:

  • Create a custom “native” tenant with a BD, VRF, EPG and a static path with vlan-1 as access (untagged)
  • In the same user tenant, create a new BD and EPG with a static path of any VLAN as access (untagged)

Method 1:

 

Access policies need to be modified a little for this method. A new physical domain needs to be created, as well as a VLAN pool containing vlan-1. Finally, associate that new domain to the AAEP for the host/CNA created earlier.

 

From the APIC CLI/NXOS-style CLI copy and paste this configuration to configure a new tenant:

tenant native_tenant
  vrf context nativeVrf
  exit
  bridge-domain nativebd
    vrf member nativeVrf
  exit
  application nativeApp
    epg nativeepg
      bridge-domain member nativebd
      set qos-class level1
    exit
  exit
  interface bridge-domain nativebd

Should look like the below:

 

fab2-p1-apic1# config t
fab2-p1-apic1(config)# tenant native_tenant
fab2-p1-apic1(config-tenant)# vrf context nativeVrf
fab2-p1-apic1(config-tenant-vrf)# exit
fab2-p1-apic1(config-tenant)# bridge-domain nativebd
fab2-p1-apic1(config-tenant-bd)# vrf member nativeVrf
fab2-p1-apic1(config-tenant-bd)# exit
fab2-p1-apic1(config-tenant)# application nativeApp
fab2-p1-apic1(config-tenant-app)# epg nativeepg
fab2-p1-apic1(config-tenant-app-epg)# bridge-domain member nativebd
fab2-p1-apic1(config-tenant-app-epg)# set qos-class level1
fab2-p1-apic1(config-tenant-app-epg)# exit
fab2-p1-apic1(config-tenant-app)# exit
fab2-p1-apic1(config-tenant)# interface bridge-domain nativebd
fab2-p1-apic1(config-tenant-interface)# exit
fab2-p1-apic1(config-tenant)# exit
fab2-p1-apic1(config)#

 

Now, navigate in the GUI to the newly created native_tenant > nativeApp > nativeepg. Deploy a static path to the interface with the host, using vlan-1 as access/untagged, and associate the domain created above. Another option is to use a static leaf binding to deploy this native VLAN on all interfaces of the leaf.
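If the NXOS-style CLI is preferred, the same untagged static path can likely be deployed per leaf interface; the node and port numbers below are only placeholders for this lab's topology:

```text
apic1(config)# leaf 101
apic1(config-leaf)# interface ethernet 1/13
apic1(config-leaf-if)# switchport trunk native vlan 1 tenant native_tenant application nativeApp epg nativeepg
```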

 

At this point the vfc should be up and FIP should have completed.

 

 

 

Verification:
fab2-p1-leaf5-EX# show int e1/13 sw
Name: Ethernet1/13
  Switchport: Enabled
  Switchport Monitor: not-a-span-dest
  Operational Mode: trunk
  Access Mode Vlan: 7 (default)
  Trunking Native Mode VLAN: 7 (default)
  Trunking VLANs Allowed: 2,7,17-18
  FabricPath Topology List Allowed: 0
  Administrative private-vlan primary host-association: none
  Administrative private-vlan secondary host-association: none
  Administrative private-vlan primary mapping: none
  Administrative private-vlan secondary mapping: none
  Administrative private-vlan trunk native VLAN: none
  Administrative private-vlan trunk encapsulation: dot1q
  Administrative private-vlan trunk normal VLANs: none
  Administrative private-vlan trunk private VLANs: none
  Operational private-vlan: none
fab2-p1-leaf5-EX# show vlan ex
 
 VLAN Name                             Status    Ports                          
 ---- -------------------------------- --------- -------------------------------
 1    dpita-tenant:dpita-AP:dpita-EPG4 active    Eth1/42         
 2    native_tenant:nativebd           active    Eth1/13         
 7    native_tenant:nativeApp:nativeep active    Eth1/13         
      g                                                           
 17   dpita-tenant:dpita-fcoe-bd       active    Eth1/13, Eth1/25
 18   dpita-tenant:dpita-AP:dpita-fcoe active    Eth1/13, Eth1/25
 21   mgmt:inb                         active    --              
 24   dpita-tenant:dpita-BD2           active    Eth1/42         
 25   dpita-tenant:dpita-AP:dpita-EPG2 active    Eth1/42         
 26   dpita-tenant:dpita-BD1           active    Eth1/42         
 27   dpita-tenant:dpita-AP:dpita-EPG1 active    Eth1/42         
 28   dpita-tenant:dpita-BD3           active    Eth1/42
 29   dpita-tenant:dpita-AP:dpita-EPG3 active    Eth1/42
 30   dpita-tenant:dpita-BD4           active    Eth1/42
 
 VLAN Type  Vlan-mode  Encap                                                        
 ---- ----- ---------- -------------------------------                              
 1    enet  CE         vlan-391                                                     
 2    enet  CE         vxlan-16154555                                               
 7    enet  CE         vlan-1                                                       
 17   enet  CE         vxlan-16547725                                               
 18   enet  CE         vlan-406                                                      
 21   enet  CE         vxlan-15728622                                               
 24   enet  CE         vxlan-16089028                                               
 25   enet  CE         vlan-356                                                      
 26   enet  CE         vxlan-16646016                                               
 27   enet  CE         vlan-390                                                     
 28   enet  CE         vxlan-16416667                                                
 29   enet  CE         vlan-373                                                     
 30   enet  CE         vxlan-16252848                                               
fab2-p1-leaf5-EX#
 
 
fab2-p1-leaf5-EX# show int vfc 1/13
Vfc1/13 is trunking
    Bound interface is Ethernet1/13
    Hardware is Ethernet
    Port WWN is 20:0D:E0:0E:DA:A2:F2:CB
    Admin port mode is F, trunk mode is on
    Port mode is TF
    Port vsan is 406
    Speed is auto
    Trunk vsans (admin allowed and active) (406)
    Trunk vsans (up)                       (406)
    Trunk vsans (isolated)                 ()
    Trunk vsans (initializing)             ()
    205 fcoe in packets
    81592 fcoe in octets
    861 fcoe out packets
    1283264 fcoe out octets

 

 

 

 

Method 2:

 

This method still requires some modification of the access policies, but it allows an access/untagged VLAN to be deployed and keeps the server's data traffic inside the user tenant, assuming the server is sending untagged traffic.

 

Under the access policies, use an existing VLAN pool and tie it to a physical domain. Tie this physical domain once again to the AAEP for the host/CNA.

 

Under the user tenant, create a new BD of type Regular and create an SVI if desired. Then create a new EPG, associate the physical domain, and add the new static path as access/untagged.
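As a sketch, the Method 2 tenant objects could be created in the NXOS-style CLI along these lines, using the BD/EPG names that appear in the verification output (the VRF name is a placeholder):

```text
tenant dpita-tenant
  bridge-domain dpita-fcoe-data-bd
    vrf member <user-vrf>
  exit
  application dpita-AP
    epg dpita-fcoe-data
      bridge-domain member dpita-fcoe-data-bd
    exit
  exit
```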

 

At this point the vfc should be up and FIP should have completed.

 

 

 

Verification:
fab2-p1-leaf5-EX# show int e1/13 sw
Name: Ethernet1/13
  Switchport: Enabled
  Switchport Monitor: not-a-span-dest
  Operational Mode: trunk
  Access Mode Vlan: 1 (default)
  Trunking Native Mode VLAN: 1 (default)
  Trunking VLANs Allowed: 1,17-18,30
  FabricPath Topology List Allowed: 0
  Administrative private-vlan primary host-association: none
  Administrative private-vlan secondary host-association: none
  Administrative private-vlan primary mapping: none
  Administrative private-vlan secondary mapping: none
  Administrative private-vlan trunk native VLAN: none
  Administrative private-vlan trunk encapsulation: dot1q
  Administrative private-vlan trunk normal VLANs: none
  Administrative private-vlan trunk private VLANs: none
  Operational private-vlan: none
fab2-p1-leaf5-EX# show vlan ex
 
 VLAN Name                             Status    Ports                          
 ---- -------------------------------- --------- -------------------------------
 1    dpita-tenant:dpita-AP:dpita-     active    Eth1/13         
      fcoe-data                                                  
 17   dpita-tenant:dpita-fcoe-bd       active    Eth1/13, Eth1/25
 18   dpita-tenant:dpita-AP:dpita-fcoe active    Eth1/13, Eth1/25
 21   mgmt:inb                         active    --              
 22   dpita-tenant:dpita-BD4           active    Eth1/42          
 23   dpita-tenant:dpita-AP:dpita-EPG4 active    Eth1/42         
 24   dpita-tenant:dpita-BD1           active    Eth1/42         
 25   dpita-tenant:dpita-AP:dpita-EPG1 active    Eth1/42         
 26   dpita-tenant:dpita-BD3           active    Eth1/42         
 27   dpita-tenant:dpita-AP:dpita-EPG3 active    Eth1/42         
 28   dpita-tenant:dpita-BD2           active    Eth1/42
 29   dpita-tenant:dpita-AP:dpita-EPG2 active    Eth1/42
 30   dpita-tenant:dpita-fcoe-data-bd  active    Eth1/13
 
 VLAN Type  Vlan-mode  Encap                                                        
 ---- ----- ---------- -------------------------------                              
 1    enet  CE         vlan-405                                                      
 17   enet  CE         vxlan-16547725                                               
 18   enet  CE         vlan-406                                                     
 21   enet  CE         vxlan-15728622                                                
 22   enet  CE         vxlan-16252848                                               
 23   enet  CE         vlan-391                                                     
 24   enet  CE         vxlan-16646016                                                
 25   enet  CE         vlan-390                                                     
 26   enet  CE         vxlan-16416667                                               
 27   enet  CE         vlan-373                                                      
 28   enet  CE         vxlan-16089028                                               
 29   enet  CE         vlan-356                                                     
 30   enet  CE         vxlan-16678779                                                
fab2-p1-leaf5-EX#
fab2-p1-leaf5-EX# show int vfc 1/13
Vfc1/13 is trunking
    Bound interface is Ethernet1/13
    Hardware is Ethernet
    Port WWN is 20:0D:E0:0E:DA:A2:F2:CB
    Admin port mode is F, trunk mode is on
    Port mode is TF
    Port vsan is 406
    Speed is auto
    Trunk vsans (admin allowed and active) (406)
    Trunk vsans (up)                       (406)
    Trunk vsans (isolated)                 ()
    Trunk vsans (initializing)             ()
    205 fcoe in packets
    81592 fcoe in octets
    861 fcoe out packets
    1283264 fcoe out octets

 

 

Verify

 

 

 

fab2-p1-leaf5-EX# show vsan membership
vsan 406 interfaces
    vfc1/13            vfc1/25
fab2-p1-leaf5-EX# show int vfc1/25
Vfc1/25 is trunking
    Bound interface is Ethernet1/25
    Hardware is Ethernet
    Port WWN is 20:19:E0:0E:DA:A2:F2:CB
    Admin port mode is NP, trunk mode is on
    Port mode is TNP
    Port vsan is 406
    Speed is auto
    Trunk vsans (admin allowed and active) (406)
    Trunk vsans (up)                       (406)
    Trunk vsans (isolated)                 ()
    Trunk vsans (initializing)             ()
    6 fcoe in packets
    804 fcoe in octets
    0 fcoe out packets
    0 fcoe out octets
    Interface last changed at 2016-06-03T16:55:17.330+03:00
fab2-p1-leaf5-EX#
 
fab2-p1-leaf5-EX# show npv status
 
npiv is enabled
 
disruptive load balancing is disabled
 
External Interfaces:
====================
  Interface: vfc1/25, State: up
        VSAN:  406, State: up , FCID: E8:00:20
 
  Number of External Interfaces: 1
 
Server Interfaces:
==================
  Interface: vfc1/13, VSAN:  406, State: wait-flogi
 
  Number of Server Interfaces: 1
========================
fab2-p1-leaf5-EX#
fab2-p1-leaf5-EX#
fab2-p1-leaf5-EX#
fab2-p1-leaf5-EX# show fcoe database
-------------------------------------------------------------------------------
INTERFACE       FCID            PORT NAME               MAC ADDRESS
-------------------------------------------------------------------------------
vfc1/25         E8:00:20        20:7C:8C:60:4F:3A:32:BF 8C:60:4F:3A:32:A0
 
Total number of flogi count from FCoE devices = 1.
fab2-p1-leaf5-EX# show vlan ex
 
 VLAN Name                             Status    Ports                          
 ---- -------------------------------- --------- -------------------------------
 10   dpita-tenant:dpita-fcoe-bd       active    Eth1/13, Eth1/25
 11   dpita-tenant:dpita-didata-       active    Eth1/13, Eth1/25
      recreate:epg2                                              
 18   mgmt:inb                         active    --              
 20   dpita-tenant:dpita-BD3           active    Eth1/42         
 21   dpita-tenant:dpita-AP:dpita-EPG3 active    Eth1/42         
 22   dpita-tenant:dpita-BD4           active    Eth1/42         
 23   dpita-tenant:dpita-AP:dpita-EPG4 active    Eth1/42         
 24   dpita-tenant:dpita-BD1           active    Eth1/42         
 25   dpita-tenant:dpita-AP:dpita-EPG1 active    Eth1/42         
 26   dpita-tenant:dpita-BD2           active    Eth1/42         
 28   dpita-tenant:dpita-AP:dpita-EPG2 active    Eth1/42
 
 VLAN Type  Vlan-mode  Encap                                                        
 ---- ----- ---------- -------------------------------                              
 10   enet  CE         vxlan-16547725                                               
 11   enet  CE         vlan-406                                                     
 18   enet  CE         vxlan-15728622                                               
 20   enet  CE         vxlan-16416667                                               
 21   enet  CE         vlan-373                                                     
 22   enet  CE         vxlan-16252848                                               
 23   enet  CE         vlan-391                                                      
 24   enet  CE         vxlan-16646016                                               
 25   enet  CE         vlan-390                                                     
 26   enet  CE         vxlan-16089028                                                
 28   enet  CE         vlan-356
 

 

 

Troubleshoot

 

Helpful commands:

 

  • show vlan ex
    • Will show the VLAN and VSANs deployed on a switch
  • show int e1/x switchport
    • Will show the VLANs/VSANs deployed on an interface
  • show fcoe database
    • NP port logins
  • show npv status
    • Displays information on NPV and the external/server-facing interfaces and VSANs
  • show int vfc
    • Displays information on the particular virtual fiber channel interface. Status, Binding, VSAN, and port mode
  • show vsan membership
    • Displays all VSANs and the VFCs that are members of each VSAN
  • show vlan fcoe
    • Displays binding of VLAN to BD to VSAN and its current state
  • show lldp interface e1/x
    • Shows DCBX information
  • show npv flogi-table
    • Shows logins from Server CNAs on the VFC F-port

FIP:

  • FIP uses ethertype 0x8914 and goes directly to the CPU
  • FCoE data frames use ethertype 0x8906

To troubleshoot FIP, use the following commands:

 

tcpdump -xxi tahoe0 > /tmp/inband.log

 

 

 

tcpdump -i kpm_inb | grep 8914

 

15:08:06.915374 0e:fc:00:e8:00:40 (oui Unknown) > 8c:60:4f:3a:32:b8 (oui Unknown), ethertype Unknown (0x8914), length 56: 
15:08:06.928086 00:22:bd:d6:57:e5 (oui Unknown) > 01:10:18:01:00:02 (oui Unknown), ethertype Unknown (0x8914), length 56: 
15:08:06.949260 00:22:bd:d6:57:e5 (oui Unknown) > 01:10:18:01:00:02 (oui Unknown), ethertype Unknown (0x8914), length 56: 
15:08:06.961130 00:22:bd:d6:57:e5 (oui Unknown) > 01:10:18:01:00:02 (oui Unknown), ethertype Unknown (0x8914), length 56: 
15:08:06.961990 00:22:bd:d6:57:e5 (oui Unknown) > 01:10:18:01:00:02 (oui Unknown), ethertype Unknown (0x8914), length 56: 
15:08:06.962840 8c:60:4f:3a:32:b8 (oui Unknown) > 00:22:bd:d6:57:e5 (oui Unknown), ethertype Unknown (0x8914), length 2172: 

 

 

FCoE internal commands (From NPI slides):


Run these commands from vsh

 

FC PM internal commands

  • sh port internal event-history msg
  • sh port internal event-history interface vfcx/y
  • sh port internal event-history errors

VSAN Manager Internal commands

  • show vsan internal event-history errors
  • show vsan internal event-history msgs

FcFwd Commands

  • sh system internal fcfwd bdportmap entries (bd-fd-encap vlan table)
  • sh system internal fcfwd mpmap vfcs
  • sh system internal fcfwd pcmap (eth shadow PC)

FCoE and VFC show internal commands

  • show system internal fcoe_mgr event-history errors
  • show system internal fcoe_mgr event-history msgs
  • show system internal fcoe_mgr info global
  • show system internal fcoe_mgr mem-stats detail
  • show system internal fcoe_mgr event-history interface vfc <vfc-id>
  • show system internal fcfwd mpmap vfcs

NPV Internal commands

  • show npv internal event-history distrib-fsm
  • show npv internal event-history ext-if-fsm interface <if> vsan <vsan>
  • show npv internal event-history ext-if-fsm interface <if>
  • show npv internal event-history ext-if-fsm vsan <vsan>
  • show npv internal event-history ext-if-fsm
  • show npv internal event-history flogi-fsm interface <if>
  • show npv internal event-history flogi-fsm pwwn <wwn>
  • show npv internal event-history flogi-fsm vsan <vsan>
  • show npv internal event-history flogi-fsm
  • show npv internal event-history svr-if-fsm interface <if>
  • show npv internal event-history svr-if-fsm vsan <vsan>
  • show npv internal event-history svr-if-fsm
  • show npv internal event-history errors
  • show npv internal event-history events
  • show npv internal event-history msgs

 

 

FCoE Logs:

  • /tmp/logs/fcoe_mgr_trace.txt
  • /tmp/logs/fcnportmux_debug_xx
  • /tmp/logs/port_mgr.log

 

 

Appendix

 

Native-Tenant Config

 

The following configuration can be copy/pasted into the APIC NXOS-style CLI. The next step would be to add static paths for the interfaces connected to CNAs.

 

config t
tenant native_tenant
  vrf context nativeVrf
  exit
  bridge-domain nativebd
    vrf member nativeVrf
  exit
  application nativeApp
    epg nativeepg
      bridge-domain member nativebd
      set qos-class level1
    exit
  exit
  interface bridge-domain nativebd
  exit
exit

 

 

Scale Numbers

 

  • Total VSAN per Leaf: 32
  • Total VFC per Leaf: 48
  • Total FDISC per port: 255
  • Total FDISC per SB Leaf: 512

 

N5K Configuration

!Command: show running-config
!Time: Fri Jun 18 10:49:26 2010

version 5.2(1)N1(6)
feature fcoe
hostname n5k-1
feature npiv
feature telnet
cfs ipv4 distribute
feature lldp

banner motd #Nexus 5000 Switch
#

ip domain-lookup
system qos
  service-policy type queuing input fcoe-default-in-policy
  service-policy type queuing output fcoe-default-out-policy
  service-policy type qos input fcoe-default-in-policy
  service-policy type network-qos fcoe-default-nq-policy
snmp-server user admin network-admin auth md5 0xc4ff8bf3d16d4eeac2958943bd7b36e7 localizedkey
vrf context management
  ip route 0.0.0.0/0 172.18.217.1
vlan 1
vlan 406
  fcoe vsan 406
port-profile default max-ports 512
vsan database
  vsan 406
device-alias mode enhanced
device-alias database
  device-alias name c210-h4-a pwwn 20:00:00:22:bd:d6:57:e5
  device-alias name c210-h4-b pwwn 20:00:00:22:bd:d6:57:e6
  device-alias name fas3040-a pwwn 50:0a:09:81:87:59:70:17
  device-alias name fas3040-b pwwn 50:0a:09:82:87:59:70:17

device-alias commit

fcdomain fcid database
  vsan 406 wwn 50:0a:09:81:87:59:70:17 fcid 0xe80000 dynamic
!                [fas3040-a]
  vsan 406 wwn 20:19:e0:0e:da:a2:f2:cb fcid 0xe80020 dynamic
  vsan 406 wwn 20:00:00:22:bd:d6:57:e5 fcid 0xe80040 dynamic
!                [c210-h4-a]

interface vfc125
  bind interface Ethernet1/25
  switchport trunk allowed vsan 406
  no shutdown

interface vfc131
  bind interface Ethernet1/31
  switchport trunk allowed vsan 406
  no shutdown

vsan database
  vsan 406 interface vfc125
  vsan 406 interface vfc131

interface Ethernet1/25
  switchport mode trunk
  switchport trunk allowed vlan 406

interface Ethernet1/26

interface Ethernet1/27

interface Ethernet1/28

interface Ethernet1/29

interface Ethernet1/30

interface Ethernet1/31
  switchport mode trunk
  switchport trunk allowed vlan 406

line console
line vty
boot kickstart bootflash:/n5000-uk9-kickstart.5.2.1.N1.6.bin
boot system bootflash:/n5000-uk9.5.2.1.N1.6.bin

zone mode enhanced vsan 406
!Active Zone Database Section for vsan 406
zone name h4-netapp vsan 406
    member device-alias c210-h4-a
    member device-alias c210-h4-b
    member device-alias fas3040-a
    member device-alias fas3040-b

zoneset name vsan406-dpita vsan 406
    member h4-netapp

zoneset activate name vsan406-dpita vsan 406
do clear zone database vsan 406
!Full Zone Database Section for vsan 406
zone name h4-netapp vsan 406
    member device-alias c210-h4-a
    member device-alias c210-h4-b
    member device-alias fas3040-a
    member device-alias fas3040-b

zoneset name vsan406-dpita vsan 406
    member h4-netapp

zone commit vsan 406

ACI 2.0!

Hello everyone!

ACI 2.0 has been released! It's 2.0(1m) and 12.1m! Here is a list of cool new features,

taken right from the release notes, found here:

ACI 2.0(1m) release notes

Table 3 New Software Features, Guidelines, and Restrictions

Feature: ACI vCenter Plugin for VMware vSphere Web Client
Description: The Cisco ACI vCenter plugin is a user interface that allows you to manage the ACI fabric from within the vSphere Web client. Only Cisco ACI vCenter plugin 5.0 and later is supported. For more information, see the Cisco ACI Virtualization Guide.
Guidelines and Restrictions: None.

Feature: AVS Health Status
Description: Cisco ACI reports errors that occur on nodes in the fabric to the Cisco APIC as an aid to troubleshooting. Cisco AVS faults are now reported as well as faults for leaf and spine switches in the ACI fabric.
Guidelines and Restrictions: None.

Feature: BGP Limit on the Maximum Autonomous System Numbers
Description: A control knob was added to the BGP timers policy that discards BGP routes that have a number of autonomous system path segments that exceed the specified limit.
Guidelines and Restrictions: None.

Feature: Contract Permit Logging
Description: You can enable and view contract Layer 2 and Layer 3 permit log data to troubleshoot packets and flows that were allowed to be sent through contract permit rules. You can also enable and view taboo contract Layer 2 and Layer 3 logs for packets and flows that were dropped due to taboo contract deny rules.
Guidelines and Restrictions: This feature is supported only on 93xx-EX switches.

Feature: COOP Authentication
Description: COOP data path communication provides high priority transport using secured connections. COOP is enhanced to leverage the MD5 option to protect COOP messages from malicious traffic injection. The APIC controller and switches support COOP protocol authentication.
Guidelines and Restrictions: None.

Feature: Copy Services
Description: Unlike SPAN, which duplicates all of the traffic, the Cisco Application Centric Infrastructure (ACI) contract copy feature enables selectively copying portions of the traffic between endpoint groups, according to the specifications of the contract. Broadcast, unknown unicast and multicast (BUM), and control plane traffic that are not covered by the contract are not copied. SPAN copies everything out of endpoint groups, access ports, or uplink ports. Unlike SPAN, copy contracts do not add headers to the copied traffic. Copy contract traffic is managed internally in the switch to minimize impact on normal traffic forwarding. For more information, see the Cisco APIC Layer 4 to Layer 7 Services Deployment Guide.
Guidelines and Restrictions: This feature is supported only on 93xx-EX switches.

Feature: Difference Between Local Time and Unified Cluster Time
Description: This value is the calculated time difference, in milliseconds, between local time and unified cluster time. Unified cluster time is an internal time that is used to time stamp changes within the cluster fabric. Unified cluster time is synchronized internally, cannot be changed by the user, and is used to identify the sequence of changes across different cluster nodes. Unified cluster time can be significantly different than the system time. The difference between local time and unified cluster time can be either a negative or positive value, which indicates whether the local time is ahead of or behind the unified cluster time.
Guidelines and Restrictions: None.

Feature: Digital Optical Monitoring
Description: In this release, you can enable and view digital optical monitoring (DOM) statistics to troubleshoot physical optical interfaces (on transceivers) for both leaf and spine nodes. The statistics include the number of alerts, Tx fault count, and Rx loss count, as well as the value and thresholds for temperature, voltage, electrical current, optical Tx power, and optical Rx power for the interface.
Guidelines and Restrictions: None.

Feature: Distributed Firewall Permit Logging
Description: Cisco AVS now reports the flows that are permitted by Distributed Firewall to the system log (syslog) as well as flows that are denied. You can configure parameters for the flows in the CLI or REST API to assist with auditing network security.
Guidelines and Restrictions: None.

Feature: EPG Delimiter
Description: When creating a vCenter domain or SCVMM domain, you can now specify a delimiter to use with the VMware port group name. For more information, see the Cisco ACI Virtualization Guide.
Guidelines and Restrictions: None.

Feature: EPG Deployment Through AEP
Description: Attached entity profiles can be associated directly with application EPGs, which deploys the associated application EPGs to all of the ports that are associated with the attached entity profile.
Guidelines and Restrictions: None.

Feature: FCoE N-Port Virtualization Support
Description: ACI 2.0(1) supports Fibre Channel over Ethernet (FCoE) traffic through direct connections between hosts and F port-enabled interfaces and direct connections between the FCF device and an NP port-enabled interface on ACI leaf switches.
Guidelines and Restrictions: This feature is supported only on 93xx-EX switches. FCoE host-to-F port or FEX-to-NP port connections through intervening FEX devices are not supported. Static endpoints for an FCoE end host are not supported.

Feature: IGMP Snoop Policy Disable
Description: The IGMP snoop policy now supports the adminSt parameter, which can be used to disable IGMP snooping on ACI.
Guidelines and Restrictions: None.

Feature: Layer 3 EVPN Services Over Fabric WAN
Description: The Layer 3 EVPN services over fabric WAN feature enables much more efficient and scalable ACI fabric WAN connectivity. It uses the BGP EVPN protocol over OSPF for WAN routers that are connected to spine switches.
Guidelines and Restrictions: You cannot use this feature with the multipod feature. Only a single Layer 3 EVPN Services Over Fabric WAN provider policy can be deployed on spine switch interfaces for the whole fabric.

Feature: Layer 3 Multicast
Description: Cisco APIC supports the Layer 3 multicast feature with multicast routing enabled using the Protocol Independent Multicast (PIM) protocol. Layer 3 multicast supports Any Source Multicast (ASM) and Source-Specific Multicast (SSM).
Guidelines and Restrictions: This feature is supported only on 93xx-EX switches.

Feature: Multipod Support
Description: Multipod enables provisioning a more fault-tolerant fabric comprised of multiple pods with isolated control plane protocols. Also, multipod provides more flexibility with regard to the full mesh cabling between leaf and spine switches. For example, if leaf switches are spread across different floors or different buildings, multipod enables provisioning multiple pods per floor or building and providing connectivity between pods through spine switches.
Guidelines and Restrictions: You cannot use this feature with the Layer 3 EVPN services over fabric WAN feature.

Feature: OSPF Inbound Route Controls
Description: Support is added for inbound route controls in Layer 3 Outside tenant networks, using OSPF. This includes aggregate import route controls using OSPF.
Guidelines and Restrictions: None.

Feature: Policy-Based Routing
Description: Cisco Application Centric Infrastructure (ACI) policy-based routing (PBR) enables provisioning service appliances such as firewalls or load balancers as managed or unmanaged nodes without needing a Layer 4 to Layer 7 package. Typical use cases include provisioning service appliances that can be pooled, tailored to application profiles, scaled easily, and have reduced exposure to service outages. PBR simplifies the deployment of service appliances by enabling the provisioning consumer and provider endpoint groups all to be in the same VRF instance. For more information, see the Cisco APIC Layer 4 to Layer 7 Services Deployment Guide.
Guidelines and Restrictions: None.

Feature: Port Security
Description: The port security feature protects the ACI fabric from being flooded with unknown MAC addresses by limiting the number of MAC addresses learned per port. This feature support is available for physical ports, port channels, and virtual port channels.
Guidelines and Restrictions: This feature is supported only on 93xx-EX switches.

Feature: Support for Multiple vCenters per Fabric
Description: You can now have 50 vCenters per fabric.
Guidelines and Restrictions: None.

Feature: VMware vRealize Integration Enhancements
Description: vRealize 7.0 and the vCenter plugin are now supported. The following blueprints are now supported: Generate and Add Certificate to APIC; Add FW to Tenant Network – VPC Plan. For more information, see the Cisco ACI Virtualization Guide.
Guidelines and Restrictions: None.

Feature: vRealize Support for AVS
Description: Cisco AVS is now supported in VMware's vRealize Automation (vRA) and vRealize Orchestrator (vRO), parts of the VMware vRealize Suite for building and managing multivendor hybrid cloud environments.
Guidelines and Restrictions: None.