Cisco APIC 4.1(1i) released

Release 4.1(1i) became available on March 28, 2019.

By prox

New software features: Infrastructure

  • APIC now supports the BGP multicast v4 address family.
  • This release adds the EPG Communication tab. This tab enables you to create communication between two EPGs and to monitor which EPGs are communicating with one another through a contract and filters. Using this tab is a simpler, faster way to set up a contract between EPGs.
  • This release enhances FC NPV to support:
    • Having an FCoE host that uses FEX over an FC NPV link
    • 32G Brocade interoperability
  • Support is now available for configuring filter groups, with flow entries that are used to filter the traffic, and for associating them with SPAN source groups. For more information, see the Cisco APIC Troubleshooting Guide, Release 4.1(x).
  • IP SLA: Internet protocol service level agreement (IP SLA) tracking is a common requirement in networks that allows a network administrator to collect information about network performance in real time. With Cisco ACI IP SLA, you can track an IP address using ICMP and TCP probes. Tracking configurations can influence route tables, allowing routes to be removed when tracking results are negative and returned to the table when the results become positive again. For more information, see the Cisco APIC Layer 3 Networking Configuration Guide, Release 4.1(x), chapter 20.
  • Layer 1/Layer 2 policy-based redirect: This feature allows you to configure policy-based redirect on Layer 1 or Layer 2 service devices. For more information, see the Cisco APIC Layer 4 to Layer 7 Services Deployment Guide, Release 4.1(x), chapter 10.
    • Active-active deployment is not supported.
    • The two legs of the Layer 2 service device must be configured on different leaf switches to avoid packet loops. Per-port VLAN is not supported.
    • Shared bridge domain is not supported. A Layer 1/Layer 2 device bridge domain cannot be shared with a Layer 3 device or with regular EPGs.
    • Service node in managed mode is not supported.
    • Layer 1/Layer 2 devices support the physical domain only; the VMM domain is not supported.
  • Support is now available for local SPAN with port-channels as the destination. For more information, see the Cisco APIC Troubleshooting Guide, Release 4.1(x), chapter 6.
    • Sources and the port-channel must be local on the same switch.
  • You can now use mini ACI fabric with ACI Multi-Site topology on a single pod.
  • Support is now available for Multicast Listener Discovery (MLD) snooping. For more information, see the Cisco APIC Layer 3 Networking Configuration Guide, Release 4.1(x), chapter 23.
  • You can create a multi-tier ACI fabric topology that corresponds to a Core-Aggregation-Access architecture found in many existing data centers. While providing all of the benefits of the ACI fabric, the multi-tier architecture enhancement also mitigates the need to upgrade costly components such as rack space or cabling. The addition of a tier-2 leaf layer makes this topology possible. The tier-2 leaf layer supports connectivity to hosts or servers on the downlink ports and connectivity to the leaf layer (aggregation) on the uplink ports.
  • The SSD Monitoring feature enables you to override the preconfigured thresholds for the SSD lifetime parameters and raise faults when the SSD reaches some percentage of the configured thresholds. These faults allow network operators to monitor and proactively replace a switch before it fails due to an SSD exceeding its lifetime parameter values.
    • This feature requires Micron M600 64 GB SSDs.
    • You cannot configure this feature using the CLI.
  • Virtual Port Channel migration: This feature allows the migration of nodes from a non-EX, non-FX, or non-FX2 switch to an EX, FX, or FX2 switch.
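Like most APIC features, the IP SLA tracking described above can be configured through the REST API as well as the GUI. The sketch below only builds a candidate JSON payload; the class name (fvIPSLAMonitoringPol), the attribute names (slaType, slaFrequency), and the DN format are assumptions based on the APIC object model and should be verified against the Management Information Model reference for your release.

```python
import json

def build_ipsla_payload(tenant, name, protocol="icmp", frequency=60):
    """Build a JSON body for an IP SLA monitoring policy.

    The class name, attribute names, and DN format here are
    assumptions; check them against the APIC Management Information
    Model reference for your release before posting.
    """
    return {
        "fvIPSLAMonitoringPol": {
            "attributes": {
                "dn": f"uni/tn-{tenant}/ipslaMonitoringPol-{name}",
                "name": name,
                "slaType": protocol,             # "icmp" or "tcp"
                "slaFrequency": str(frequency),  # probe interval in seconds
            }
        }
    }

payload = build_ipsla_payload("prod", "track-gw")
print(json.dumps(payload, indent=2))
# A REST client would POST this body to https://<apic>/api/mo/uni.json
```

The tenant name "prod" and policy name "track-gw" are placeholders; an authenticated session against a real APIC would be required to apply the policy.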

New software features: Fabric Scale and other enhancements

  • You can now bookmark almost any page, which enables you to go back to that page easily by choosing the bookmark from your list of bookmarks. In previous releases, this feature was represented as favorites (the star icon), and had less capability.
  • Some of the wizards now include a confirmation screen and summary screen as the last steps. On the confirmation screen, you see a list of the policies that the wizard will create. You can change the names of the policies, if necessary. After the confirmation screen is the summary screen, which shows you the policies that the wizard created. You can no longer change the policies' names, but you can edit the properties of a policy.
  • Default tab: This feature enables you to set a tab as the "favorite" on a page. Whenever you navigate to that page, that tab will be the default tab that is displayed. This feature is enabled only for the tabs in the Work pane.
  • Physical interface configuration now includes error counter statistics information.
  • Export tech support configuration data: This enhancement allows the user to export tech support data or configurations with read-only privileges.
  • GTP load balancing: This feature enables the Cisco APIC to perform fabric load balancing based on GTP TEID.
  • Leaf switch uplink ports priority: When the fabric is scaled with numerous bridge domains, endpoint groups, and so on, each of which is allocated a VLAN, VLAN resource contention can occur. Reloading a leaf switch in this state causes the leaf-to-spine switch uplinks to enter the disabled state (those links do not come up). In this release, the leaf-to-spine switch uplinks are given a higher priority for the VLAN resources allocated to them, so that reloading a leaf switch while the switch is in a VLAN resource contention state does not affect the leaf-to-spine switch uplinks (the links come up).
  • You can now run an app in multiple GUI screens, or "contexts." For example, you can run the app while looking at a tenant's application profiles and while looking at the tenant's contracts. Prior to the 4.1 release, you could run an app only in one context; switching to a different context would close the app.
  • New alerts: This release adds the following alerts:
    • Leaf x is Inactive: This alert warns you that a leaf switch became inactive, powered down, or disconnected.
    • New Switch Discovered: This informational alert informs you when a new switch is discovered.
    • Node Outage: Indicates that a node is either down or reloading.
    • Node x Must Be Reloaded: This alert warns you that an SSD must be reformatted and repartitioned.
    • OSPF Connectivity is Down: This alert warns you when OSPF connectivity is down. The alert lists the interfaces that have OSPF configured, but are not able to communicate with one another, and provides a recommended troubleshooting action.
    • Process Crash: This alert warns you that a process has crashed.
    • Split-Fabric Detected: Indicates that the fabric is split and that the controller is operating in read-only mode.
  • Scale changes: This release includes the following scale changes:
    • Maximum number of remote leaf switches: 128 (single pod)
    • 100 sub-interfaces per VRF and per L3Out
    • 30,000 IPv4/IPv6 LPM prefixes on a border leaf switch (EX, FX, and FX2 platforms)
    • 4,000 MAC address EPGs
  • Visore has the following improvements:
    • The Visore tool has a new, modernized look-and-feel.
    • You can now search by class, distinguished name, or URL, instead of only class and distinguished name. After you find an object, you can make the object a favorite, which enables you to go to your list of favorites and load the object from there.
    • You can now view the JSON response of your last query; previously you could only view the XML response.
    • Visore by default displays all of the properties, even those that have no value. You can now hide the properties that do not have a value.
    • You can now navigate the distinguished name using the bread crumbs, which is simpler and easier to use.
    • You can now view a distinguished name's stats, faults, or health only if there is applicable data.
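Visore's class and DN searches map onto the APIC REST API's /api/class and /api/mo endpoints. As a rough illustration (the APIC hostname is a placeholder), the following builds the kind of class-query URL, with an optional query-target-filter, that Visore issues behind the scenes:

```python
def class_query_url(apic, cls, prop=None, value=None):
    """Build an APIC REST API class-query URL.

    query-target-filter with eq()/ne() expressions is a documented
    APIC query parameter; the host name used below is hypothetical.
    """
    url = f"{apic}/api/class/{cls}.json"
    if prop is not None:
        # Restrict results to objects whose property equals the value.
        url += f'?query-target-filter=eq({cls}.{prop},"{value}")'
    return url

print(class_query_url("https://apic.example.com", "fvTenant", "name", "prod"))
# → https://apic.example.com/api/class/fvTenant.json?query-target-filter=eq(fvTenant.name,"prod")
```

A GET on such a URL (after authenticating via aaaLogin) returns the same managed objects that Visore renders.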

New software features: Virtualization

  • Cisco ACI integration with Cisco SD-WAN: This integration enables tenant admins to apply preconfigured policies to specify the levels of packet loss, jitter, and latency for tenant traffic over the WAN. When a WAN SLA policy is applied to tenant traffic, the Cisco APIC sends the configured policies to a vManage controller. The vManage controller, which is configured as an external device manager that provides Cisco Software-Defined Wide Area Network (SD-WAN) capability, chooses the best possible WAN link that meets the loss, jitter, and latency parameters specified in the SLA policy.
  • Cisco ACI with Cisco UCSM integration: You can automate networking policies on Cisco UCS devices. To do so, you integrate Cisco UCSM into the Cisco Application Centric Infrastructure (ACI) fabric. Cisco APIC takes hypervisor NIC information from the Cisco UCSM and a virtual machine manager (VMM). The automation applies to all the devices that the Cisco UCSM manages.
    • If you use Cisco Application Virtual Switch (AVS) or Microsoft System Center Virtual Machine Manager (SCVMM), you also must associate a switch manager with the VMM.
    • If you use Cisco ACI Virtual Edge or VMware vSphere Distributed Switch (VDS), make the association if you do not use LLDP or CDP in your VMM domain.

Resolved caveats


A shard is stuck and cannot move forward. Continuous handleTxCheckTimeout can be seen in the source log with no new transaction being sent to the replica.


During same-VC-cross-DC VM migrations, one task is triggered per VM to verify its migration status. However, if too many VMs are migrated within a short period of time, the number of tasks due to the bulk migration might exceed the size of the task queue, which leads to a huge number of faults.


NGINX has an out-of-memory issue approximately every 10 hours because its memory usage grows to as much as 8 GB.


If many faults flap on the switch nodes, the GUI may run slowly and respond poorly.


ARP poisoning occurs when return traffic from the uplink to the SNAT interface is sent back to the uplink. All of the uplink IP addresses appear as coming from the SNAT MAC address.


The GUI does not display hypervisor details on double-clicking if there are more than 16 hypervisors.


A remote leaf switch configures a static route to the Cisco APIC that replied to its DHCP request. This route does not get deleted after the remote leaf switch is commissioned. This behavior might cause the static route to be redistributed to the IPN, which then points the route for this specific Cisco APIC back to the remote leaf switch.

Because the Cisco APIC in question and the remote leaf switch now have a routing issue, they cannot communicate, and the remote leaf switch cannot be managed from this Cisco APIC.


After creating and deleting the pod 2 TEP Pool under "Pod Fabric Setup Policy", the pod 1 Infra VLAN leaks to the overlay-1 route map on the leaf switch.


After upgrading to the 3.2(3n) release, fault F608054 appeared, indicating fsm-sync-with-quorum-fsm-fail.


APIC accepts the "_" (underscore) symbol as a delimiter for VMware VMM domain integration, even though it is not a supported symbol. This is an enhancement request to implement a check in the APIC GUI to reject "_".


Changing the "DH Param" setting on the APIC from the default "None" results in the following pop-up error message:

Error: 400 - Failed to update communication configuration (Configuration not valid for server (Nginx))

The configuration does not get applied.


Changing the timezone on the APIC leads to different timezones on the APICs and leaf switches. For example, choosing the Europe/Istanbul timezone leads to the time on the leaf/spine switches being GMT+3 (which is correct, due to daylight saving time), while on the APIC the timezone shows as GMT+2. This causes an issue with syslog messages sent from the APICs and leaf switches, as they have different timestamps.


Currently, the ACI upgrade process involves uploading the images to the spine/leaf switches and activating the newer code. These upload and activation procedures cannot be separated and must be performed in one maintenance window, which makes the maintenance window longer.


Database files and logs related to a previous upgrade are not collected in the techsupport files.




If a techsupport is exported to the APIC after a configuration snapshot is taken, rolling back to that snapshot removes the techsupport configuration and the logs that are saved on the APIC. The exported logs should instead be preserved for as long as possible, and rolling back the snapshot should not cause the log/trace removal.


If multiple path attachments (l2extRsPathL2OutAtt) are configured for the same interface under different node profiles (l2extLNodeP) or different interface profiles (l2extLIfP), the configuration is blocked by the policy manager (PM), but is allowed by the policy distributor (PD), causing an inconsistency between the components. As a result, the posted configuration will not be displayed in the GUI.

Additionally, depending on the APIC version, the configuration push failure from the PD to PM may cause all subsequent configurations for the shard to fail. As a result, all configuration changes for a particular tenant may appear to fail.


If the infra-VLAN in the input to acc-provision is different from what it detects on the fabric, acc-provision warns you that infra-VLAN configuration is incorrect, but it still uses the requested VLAN as the desired value for infra-VLAN.


If you configure two OSPF L3Outs with external networks, one on border leaf 101 and the other on border leaf 102, sometimes a route is learned from border leaf 102, but "Visibility & Troubleshooting" shows the destination border leaf as border leaf 101.


In an ACI GOLF deployment, the external GOLF router advertises its routes using BGP EVPN to the spine switches, which reoriginate those routes into VPNv4 and advertise them to the leaf switches that should import them. These VPNv4 routes have the VXLAN VNID of the originating VRF instance set in the "label" field, and in the hardware a rewrite entry is added for this VNID corresponding to the VXLAN tunnel that extends to the GOLF router.

Due to hardware restrictions within a single VRF instance on an ACI leaf switch, there can only be one VXLAN VNID rewrite entry per tunnel. If two VRF instances in ACI are configured with the same route-target import/export policy, then the leaf switch will attempt to import the VPNv4 routes in the same VRF instance with different VXLAN VNIDs.

Because only a single rewrite VNID can be installed per tunnel per VRF instance, this prevents some rewrite VNIDs from being installed in hardware. As a result, you may see that traffic from the leaf switch to the GOLF router either has a VNID of 0 or has the wrong VNID set.


On the Mgmt tenant, when trying to configure monitoring policies, the button does not take any action and monitoring policies cannot be configured on this tenant.


PLR fails after upgrading to a Cisco APIC 4.0 release.


SNMP commands on the CLI cause an error to be displayed.


The @ symbol cannot be used when configuring an SNMP community because the symbol is interpreted as a delimiter for the context. Using @ results in unknown context errors incrementing in the "show snmp" command output.


The APIC syslog does not log the username for login/logout attempts.


The Cisco APIC sets the mcast attribute to "yes" after disabling PIM on an L3Out. However, the APIC should instead set the attribute to "no."


The current implementation of APIC techsupport collects the latest 10,000 logs of audit and faults. That works for certain scenarios. However, there are many troubleshooting scenarios that need to get all records of the audits, faults, and events. This enhancement is requesting the implementation of the collection of all audit, fault, and event logs in separate compressed gzip files.


The Dashboard UI can display negative or wrong fault counts that are inconsistent with the "Fault Summary" UI.


The F0053 fault is generated.


The F3222 fault displays after you delete a pool.


The following issues are observed:

  • The LLDPAD process crashes on an APIC.
  • The LLDPAD service cannot start anymore after the crash, not even after a cold reboot of the APIC.
  • EDAC errors are observed in dmesg prior to the LLDPAD crash.
  • The LLDPAD crash causes the directly-connected leaf switches to lose the APIC controller LLDP adjacency (lldpCtrlrAdjEp).
  • eth2-1 and eth2-2 do not receive any frames/packets anymore, as observed with the ifconfig command.
    • TX counters are going up.
  • The APIC gets a reduced health score in avread (health: 2) and perceives the cluster state incorrectly due to there being no RX frames/packets.


The Name column is missing on the Subnets table from the External Network Policy screen in the Routed Outside Policy screen.

The Name field is missing on the Create Subnet panel.


The neutron call using an IP CIDR for the --allowed-address-pairs feature is not supported with the ACI plugin for OpenStack.


The query lists incorrect objects that do not match the "eq" or "ne" filter.


The same IP address added under NTP in different formats (with/without leading zeroes) is treated as multiple unique entries on the APIC, while a switch will have a single entry. Deleting one of the entries on the APIC will delete that entry from the switch.


The 'showconfig' command from the APIC does not print the config and instead generates a traceback. This will result in an invalid user_config file in the APIC 1of3 techsupport file.


The subject and body fields do not allow modification.


There is a lot of lag when entering the command show endpoint ip <ip>.


There is no way to see the next DHCP address to be assigned from a DHCP pool.


This is an enhancement request to include the ethpmFcot MO to ACI leaf switch techsupport files.


This is an enhancement request to include the output of /proc/kpm_err_stat in ACI switch techsupports.


This is an enhancement to set the reload delay timer between ACI ToR upgrades when the maximum number of concurrent nodes is 1.


This is an enhancement to update OpenSSH to version 7.8+ to remediate CVE-2018-15473.


TPM is supposed to be used to encrypt certain partitions, but on a 4.0 release, the image can be installed without TPM being enabled, and the APIC can also boot up without it.


Traffic returning from a PBR node is redirected back to the PBR node, forming a loop.


When a spine switch is used as an L3Out device to the IPN/ISN in a Multi-Pod with Cisco ACI Multi-Site configuration, after switching over the supervisor in a 9508 chassis, a flapping event might be seen in the logs for a DHCP client interface operational state on the L3Out external interface.


When attempting to upload a firmware file to an APIC, an error indicating the "repository [is] over 80% full" appears. Even deleting previously uploaded firmware files does not clear out enough space.


When creating a firmware download job in the APIC GUI by using Admin > Download Tasks > "create outside firmware source" and selecting SCP while your web browser is connected to APIC1, APIC2 might actually be downloading the file. This is an enhancement request for the APIC GUI to indicate which APIC is trying to download the file when the download fails, so that further troubleshooting can be done on the correct APIC.


When the FEX is removed from the ACI leaf switch, the FEX status becomes offline and the FEX is completely removed after around 20 minutes. However, the license consumption on the APIC is not updated or released.


When the TACACS user privilege is admin, the vCenter plugin only gets read permissions when fetching the privileges from the APIC.


When upgrading from some 3.2 or 3.1 releases to 4.0, some or all leaf switch maintenance groups will immediately start upgrading without being user-triggered. This issue occurs as soon as the APICs finish upgrading.


When using a specific out-of-band or in-band contract to only allow certain protocols, all ports are open.


When using the "show vsan-domain detail" command, all interfaces that are configured with "NP" show as "F" mode, and the following error displays at the end of the output:

Error: Invalid RN rsvsanPathAtt-[topology/pod-1/node-1301/sys/conng/path-[10Gb-CCH-Server-VPC-SR227]]


When using the Firefox browser to view "Operations > Visibility & Troubleshooting," the zoom icons are missing on the result page.


You cannot verify within the APIC GUI where a trunk port group was created or what was pushed to it.


XML special characters in the SNMP location and name are not properly escaped when exported to XML. This causes issues when parsing the XML output.

In addition, Cisco ACI Multi-Site Orchestrator (MSO) queries the SNMP information and attempts to parse the XML export of the SNMP config. If the SNMP community policy or SNMP location have special characters (&, <, >), pushing policies onto these sites may fail, as MSO cannot parse the XML output.
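The underlying fix for this class of caveat is standard XML entity escaping of &, <, and > before export. A minimal sketch in Python (the SNMP location string is made up for illustration):

```python
from xml.sax.saxutils import escape

# Hypothetical SNMP location string containing XML special characters.
location = 'Lab <row 3> & "rack 12"'

# escape() replaces &, <, and > by default; the extra entity map also
# covers double quotes so the value is safe inside an XML attribute.
safe = escape(location, {'"': "&quot;"})
print(safe)
# → Lab &lt;row 3&gt; &amp; &quot;rack 12&quot;
```

An exporter that runs every free-form field through such an escape step produces XML that a consumer like MSO can always parse.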