Cisco APIC 4.2(5k) released

Release 4.2(5k) became available on August 22, 2020.

16 min read
By prox

New software features

Improved Precision Time Protocol support

You can now enable the Precision Time Protocol (PTP) on a leaf switch's front panel ports to connect PTP nodes, clients, or a grandmaster. The PTP implementation on fabric ports is unchanged from previous releases, except that the PTP parameters for fabric ports can now be adjusted. With this change, you can use the Cisco ACI fabric to propagate time synchronization using PTP, with Cisco ACI switches acting as PTP boundary clock nodes. Prior to this release, Cisco ACI could use PTP only within the fabric for the latency measurement feature, or could transparently forward PTP multicast or unicast messages from one leaf switch to another as a PTP-unaware tunnel.

Cisco APIC System Management Configuration Guide, Release 4.2(x)

Link flap policies

You can create a link flap policy in interface policies, which sets the state of an access port or fabric port to "error-disable" after the port flaps a specified number of times during a specified interval.
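As a rough sketch, such a policy could be defined through the APIC REST API. The class and attribute names below (fabricLinkFlapPol, linkFlapErrorMax, linkFlapErrorSeconds) and the POST target are assumptions for illustration, not verified against the 4.2(5k) object model:

```python
# Sketch: building a link flap policy payload for the APIC REST API.
# The class and attribute names are assumptions based on the feature
# description above, not a verified part of the 4.2(5k) object model.
import xml.etree.ElementTree as ET

def build_link_flap_policy(name: str, max_flaps: int, interval_sec: int) -> str:
    """Return an XML payload that error-disables a port after it flaps
    max_flaps times within interval_sec seconds."""
    pol = ET.Element("fabricLinkFlapPol", {
        "name": name,
        "linkFlapErrorMax": str(max_flaps),         # flap count threshold
        "linkFlapErrorSeconds": str(interval_sec),  # sampling interval
    })
    return ET.tostring(pol, encoding="unicode")

payload = build_link_flap_policy("flap-guard", 5, 60)
# The payload would then be POSTed to the APIC (hypothetical target URL):
# https://<apic>/api/mod/uni/infra.xml
```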

This feature is not honored on fabric extender (FEX) host interface (HIF) ports nor on leaf switch models without -EX, -FX, -FX2, -GX, or later designations in the product ID.

Cisco APIC Basic Configuration Guide, Release 4.2(x)

UCSC-PCIE-IQ10GC Intel X710 Quad Port 10GBase-T network interface card support

You can now use the UCSC-PCIE-IQ10GC Intel X710 Quad Port 10GBase-T network interface card in the Cisco APIC M3/L3 servers for 10GBase-T connectivity to Cisco ACI leaf nodes.

Upgrade enhancements

Various enhancements have been made to the upgrade process, including:

  • The restriction on the number of pods that you can upgrade in parallel has been relaxed so that you can upgrade multiple pods at the same time for pod nodes in Multi-Pod configurations. Switches in a Multi-Pod configuration that are part of the same maintenance group can now be upgraded in parallel.
  • Upgrades or downgrades might be blocked if certain issues are present.
  • Additional information is provided in the GUI for each stage of the APIC upgrade or downgrade process.
  • The default concurrency in a group has changed from 20 to unlimited, so by default there is no limit on the number of leaf or spine switches that can be upgraded at one time.
  • When upgrading nodes in an upgrade group using the GUI, a Download Progress field is available in the Work pane, which shows the progress of the firmware download for the node upgrade.
Cisco APIC Installation, Upgrade, and Downgrade Guide

Resolved issues


The Cisco APIC setup script will not accept a pod ID outside the range of 1 through 12, and the Cisco APIC cannot be added to such a pod. This issue is seen in a Multi-Pod setup when trying to add a Cisco APIC to a pod whose ID is not between 1 and 12.


Fault delegates are raised on the Cisco APIC, but the original fault instance is already gone because the affected node has been removed from the fabric.


Previously working traffic is dropped by policy after the subject is modified to have the "no stats" directive.


There is an event manager process crash.


Fault alarms get generated at a higher rate with a lower threshold. There is no functional impact.


The Cisco APIC GUI produces the following error messages when opening an EPG policy:  Received Invalid Json String.  The server returned an unintelligible response. This issue might affect backup/restore functionality.


When configuring local SPAN in access mode using the GUI or CLI and then running the show running-config monitor access session <session> command, the output does not include all source SPAN interfaces.


This is an enhancement to add columns in "Fabric > Inventory > Fabric Membership" to show BGP route reflectors within a pod and across pods (external BGP RR).


L3Out encapsulated routed interfaces and routed interfaces do not have any monitoring policy attached to them. As a result, there is no option to change the threshold values of the faults that occur due to these interfaces.


Fibre Channel conversion is allowed on an unsupported switch. The only switch that supports Fibre Channel conversion is the Cisco N9K-C93180YC-FX.


The GUI does not provide a "Revert" option for interfaces that are converted to Fibre Channel.


An app does not get fully removed from all Cisco APICs.


A (*,G) entry was created in both MRIB and MFDM, was present for nearly 9 minutes, and then expired.


The policy manager (PM) crashes after upgrading the Cisco APIC, which results in the cluster being diverged.


A switch entered into a bootloop and an upgrade is triggered multiple times if the maintenance policy is pushed with a REST API call that has the incorrect version.


The global QoS class congestion algorithm is always incorrectly shown as 'Tail Drop' even after it is changed to WRED. The managed object shows the change correctly; this is a cosmetic issue.


The route-map entry on the Cisco ACI Multi-Site speaker spine node that changes the BGP next-hop from PTEP to R-TEP for routes advertised by the border leaf node is absent. Routes are advertised with the PTEP to the other site.


Cisco APIC interfaces e2/3 and 2/4 persist in the GUI and the MIT after disabling and enabling the port channel on the VIC.


The login history of local users is not updated in Admin > AAA > Users > (double click on local user) Operational > Session.


  • Leaf or spine switch is stuck in 'downloading-boot-script' status. The node never fully registers and does not become active in the fabric.
  • You can check the status by running cat /mit/sys/summary | grep state on the CLI of the spine or leaf. If the state is set to 'downloading-boot-script' for a long period of time (> 5 minutes), you may be running into this issue.
  • Checking policy element logs on the spine or leaf switch will confirm if the bootscript file cannot be found on the Cisco APIC:
  1. Change directory to /var/log/dme/log.
  2. Grep all svc_ifc_policyelem.log files for "downloadUrl - failed, error=HTTP response code said error".
    If you see this error message, check to make sure all Cisco APICs have the node bootscript files located in /firmware/fwrepos/fwrepo/boot.
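The log scan in step 2 can be sketched in Python. The directory and file names come from the steps above; the function itself is illustrative, not an official tool:

```python
# Sketch: scan svc_ifc_policyelem.log files for the bootscript download
# failure described above. Pass a different directory when testing
# outside a switch; the default comes from the troubleshooting steps.
from pathlib import Path

ERROR_SIG = "downloadUrl - failed, error=HTTP response code said error"

def find_bootscript_failures(log_dir: str = "/var/log/dme/log") -> list[str]:
    """Return matching lines from every svc_ifc_policyelem.log* file."""
    hits = []
    for log in sorted(Path(log_dir).glob("svc_ifc_policyelem.log*")):
        for line in log.read_text(errors="replace").splitlines():
            if ERROR_SIG in line:
                hits.append(f"{log.name}: {line.strip()}")
    return hits
```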


Fault F1298 is raised and states "Delivered, Node belongs to different POD". The node actually belongs to the correct pod, and the fault is misleading.


There is a stale fvIfConn entry after physically removing the ESXi host after a host is removed from the datacenter or VMware vCenter.


The 'Primary VLAN for Micro-Seg' field is not shown unless the Allow Micro-Segmentation check box is checked.


In the Cisco APIC GUI, after removing the Fabric Policy Group from "System > Controllers > Controller Policies > show usage", the option to select the policy disappears, and there is no way in the GUI to re-add the policy.


When you have a single VMM domain deployed in two different VMware vCenters in the same SSO domain and you uninstall all Cisco ACI Virtual Edge virtual machines on one of the VMware vCenters by using the VCPlugin for the VMM domain, the VCPlugin on the other VMware vCenter for the same VMM domain shows the existing AVE as "not installed". This happens because the cisco-ave and cisco-ave-<vmm-domain> tags are removed on the other VMware vCenter for the Cisco ACI Virtual Edge virtual machines.


The Cisco APIC GUI does not expose the 'destName' property of the vnsRedirectDest managed object.


After VMware vCenter generates a huge number of events and the eventId increments beyond 0xFFFFFFFF, the Cisco APIC VMM manager service may start ignoring the newest events if their eventId is lower than the largest event ID that the Cisco APIC previously received. As a result, changes to the virtual distributed switch or AVE are not reflected on the Cisco APIC, causing required policies to not get pushed to the Cisco ACI leaf switch. For AVE, missing those events could put the port in the WAIT_ATTACH_ACK status.
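For illustration, a wraparound-safe comparison (the kind of fix this issue calls for) can be sketched as serial-number arithmetic on 32-bit IDs. This is an assumed approach, not the actual Cisco fix:

```python
# Sketch: treat event IDs as 32-bit serial numbers (RFC 1982 style) so
# that an ID that wrapped past 0xFFFFFFFF still compares as "newer".
MOD = 1 << 32   # event IDs wrap at 2^32
HALF = 1 << 31  # anything within a half-window ahead counts as newer

def is_newer(event_id: int, last_seen: int) -> bool:
    """True if event_id is newer than last_seen under 32-bit wraparound."""
    return (event_id - last_seen) % MOD in range(1, HALF)

# A naive `event_id > last_seen` check fails after the counter wraps:
assert is_newer(5, 0xFFFFFFF0)        # wrapped-around ID is still newer
assert not is_newer(0xFFFFFFF0, 5)    # the old ID is not newer
```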


When a node is not properly decommissioned, the DHCP process releases its IP address and allocates the IP address to another TEP, which causes a duplicate TEP and an outage.


SNMP poll/walk to the Cisco APIC does not work. The error message "unknown username" is received.


After decommissioning/removing a node ID from the Cisco APIC, wait for 10 minutes before re-adding the same node back into the fabric. Re-adding the node too early can result in unexpected behavior; for example, the node being decommissioned might not get wiped properly and might retain the TEP address that was allocated by the Cisco APIC.


The Authentication Type displays as "Use SSH Public/Private Files." However, Cisco APIC acts as a client to the (outside) server, and so "Private" should be the only configurable key in the "SSH Key Contents" area.


Editing a remote location with a private key that doesn’t have a passphrase is blocked due to form validation.


After creating a BGP-peer connectivity profile with the loopback option (with no loopback present on the L3Out node) in a vPC setup, the BGP session is established with a secondary IP address.


SSD lifetime can be exhausted prematurely if an unused standby slot exists.


- After decommissioning a fabric node, it is no longer displayed in the maintenance group configuration.

- Due to the lingering configuration pointing to the decommissioned node, F1300 gets raised with the description:

"A Fabric Node Group (fabricNodeGrp) configuration was not deployed on the fabric node <#> because: Node Not Registered for Node Group Policies"

- The dn mentioned in the fault will point to a maintenance group (maintgrp).


The per feature container for techsupport "objectstore_debug_info" fails to collect on spine nodes due to an invalid filepath.


After creating a Global Alias Field on an EPG in a user tenant and submitting the change, the tag can be seen as successfully created on the EPG. However, operations such as renaming or deleting do not update the tag after submitting the change.


Fault F1527 occurs in /data/log on a Cisco APIC. After collecting the "show tech file" for the Cisco APIC, the percentage is shown as only 71%.


AAEP gets deleted while changing some other policy in the policy group. This only happens when using Firefox and changing a value in the leaf access port policy group. The issue is not seen when using other browsers.


The MD5 checksum for a downloaded Cisco APIC image is not verified before the image is added to the image repository.
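As a hedged illustration of the missing safeguard, a checksum verification step might look like the following. The file path and digest handling are generic, not APIC-specific:

```python
# Sketch: verify an image's MD5 digest against the published checksum
# before accepting it into a firmware repository. Illustrative only.
import hashlib

def md5_matches(path: str, expected_hex: str) -> bool:
    """Stream the file through MD5 and compare with the published digest."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        # Read in 1 MiB chunks so large images do not load into memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest() == expected_hex.lower()
```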


Traffic from newly added subnets is allowed on one or more Cisco APICs and blocked on one or more others. Because Ext Mgmt NW Inst Prof subnets are applied and programmed on all Cisco APICs, traffic should work on all Cisco APICs.


There is a message in the Cisco APIC GUI saying that vleaf_elem has restarted several times and may not have recovered, and there are core files of the vleaf_elem process.


This is an enhancement request to warn users if they perform a configuration export without enabling AES encryption.


In the Cisco APIC GUI, under Fabric -> Inventory -> Pod 1 -> Leaf/Spine -> Summary -> Hardware Usage -> Memory, a memory usage value over 80% is colored red.


This is a modification to the product to adopt new secure-code best practices and enhance the security posture and resiliency of the Cisco Application Policy Infrastructure Controller (APIC). This defect tracks an enhancement to add the ability to block ICMP Timestamp Requests (type 13) and ICMP Timestamp Replies (type 14).


Inside the /firmware/fwrepos/fwrepo/boot directory, there is a Node-0 bootscript that seemingly points to a random leaf SN, depending on the Cisco APIC from which you're viewing the directory.


The Smart Licensing GUI page fails to load due to the JavaScript function erroring out while trying to parse an invalid LicenseManager object. The JavaScript error can be seen in the browser developer tools - console logs.


AVE is not getting the VTEP IP address from the Cisco APIC. The logs show a "pending pool" and "no free leases".


Fabric > Inventory > Topology > Topology shows the wrong Cisco APIC counts (Active + Standby) in different pods.


The Cisco APIC setup script will not accept a pod ID outside the range of 1 through 12, and the Cisco APIC cannot be added to such a pod. This issue is seen in a Multi-Pod setup when trying to add a Cisco APIC to a pod whose ID is not between 1 and 12.

CSCvm64933 was filed for a similar issue.


Protocol information is not shown in the GUI when a VRF table from the common tenant is being used in any user tenant.


Physical Interface Configuration's VLAN tab shows incorrect VLAN assignments on all ports. Ports with no EPGs deployed will show the entire switch VLAN assignment instead of no assigned VLANs.


When the productSpec of a DVS is changed from Cisco Systems to Vmware Inc as a workaround for bug CSCvr86180, reloading the VMware vCenter after that point changes the object type at the VMware vCenter (from DistributedVirtualSwitch to VmwareDistributedVirtualSwitch). As a result, the Cisco APIC deletes the hvsLNode the next time it pulls inventory from the VMware vCenter after it comes back up.

When the productSpec is switched back to Cisco Systems, a new hvsLNode is created with most of the fields left as uninitialized, which raises faults on the DVS. Lnode(DVS) gets deleted on the external VMM controller and the MTU on the DVS is different than the MTU in the policy.

This is a cosmetic issue. There is no functionality impact.


The following error message is seen when configuring: Prepend AS: Error 400 - Invalid lastnum: 1. lastnum must be 0 when criteria is prepend.


A spine switch doesn't advertise the bridge domain or host routes to the GOLF router via BGP, and the bgpPfxLeakP managed object is missing for all bridge domain subnets.


When a Multi-Pod environment is deployed in a non-home pod, the Hyper-V servers cannot establish a successful connection to the leaf switch, and the opflexODev and opflexIDEp objects are not created on the leaf switch. This results in a traffic outage, as the on-demand EPGs are removed from the setup.


The following error is encountered when accessing the Infrastructure page in the ACI vCenter plugin after inputting vCenter credentials.

"The Automation SDK is not authenticated"

The VMware vCenter plug-in is installed using PowerCLI. The following log entry is also seen in vsphere_client_virgo.log on the VMware vCenter:

[ERROR] http-bio-9090-exec-3314 PKIX path validation failed: signature check failed


VMware vCenter is offline according to the Cisco APIC. The Cisco APIC is unable to push port groups into VMware vCenter. The leader Cisco APIC for VMware vCenter connections shows as disconnected. There are faults on the VMM domain related to incorrect credentials, but the credentials are actually correct. The same credentials can be used to log in to the VMware vCenter GUI successfully. The "administrator@vsphere.local" account does not work either, so permissions should not be a problem.


- The configuration is not pushed from the Cisco APIC to RHVM. For example, when attaching a VMM domain to an EPG, the EPG is not created as a logical network in RHVM.

- vmmmgr logs indicate that Worker Q is at 300 with Max Q of 300.

- When the Q reaches 300, it appears this is caused by the class definition 'ifc:vmmmgr:taskCompHvGetHpNicAdjQualCb' using up the entire worker Q.

- There are numerous logs indicating that the sendtoController failed and the Worker is busy.


Associating an EPG to a FEX interface from Fabric->Inventory->Pod1->leaf->interface in the Cisco APIC GUI creates an unexpected tDn. As a side effect, this type of static EPG association will cause an error if you use Cisco APIC CLI to verify the leaf node configuration. The error message can be cleared by deleting all static EPG associations created from the Inventory. Use moquery to verify which configuration needs to be cleared.


Periodically, the OpFlex session disconnects. This issue was seen in a Kubernetes integration with Cisco ACI due to an ARP refresh issue for the host VTEP address.


When trying to assign a description to a FEX downlink/host port using the Config tab in the Cisco APIC GUI, the description will get applied to the GUI, but it will not propagate to the actual interface when queried using the CLI or GUI.


When changing the SNMP policy from policy1 to policy2 and if policy2 has the same SNMP v3 user configured with a different authentication key, the pod policy reports fault F2194 for all switches. The Cisco APICs in the cluster will accept the new policy; however, the switches in the fabric will not and will continue using the older policy1.


Cisco APIC accepts the "_" (underscore) symbol as a delimiter for the VMware VMM domain association, even though it is not a supported symbol. This is an enhancement request to implement a check in the Cisco APIC GUI to reject "_".


VMware vCenter and the Cisco APIC display different information about the location of the attached virtual machines.


A new APIC-L3 or M3 server will not be able to complete fabric discovery. LLDP, "acidiag verifyapic," and other general checks will not exhibit a problem.

When you check the appliancedirector logs of a Cisco APIC within the cluster to which you are trying to add the affected controller, there will be messages indicating that the rejection is happening due to being unable to parse the certificate subject.


For an EPG containing a static leaf node configuration, the Cisco APIC GUI returns the following error when clicking the health of Fabric Location:

Invalid DN topology/pod-X/node-Y/local/svc-policyelem-id-0/ObservedEthIf, wrong rn prefix ObservedEthIf at position 63


There are recurring crashes and core dumps on different Cisco APICs (which are VMM domain shard leaders), high CPU utilization for the VMMMgr process (around 200%, that is, two maxed-out CPU cores), and multiple inventory sync issues.

These issues prevent the VMMMgr process from processing any operational or configuration changes that are made on the RHVs.


When creating a VMware VMM domain and specifying a custom delimiter using the character _ (underscore), it is rejected, even though the help page says it is an acceptable character.


TACACS+ users are unable to log in to a Cisco APIC when an AV pair with a dot '.' character in the domain portion is in use. Users may be able to log in with minimal permissions if the "Remote user login policy" allows it. The following example shows an AV pair that causes the issue:

shell:domains = aci.domain/admin/
Additionally, NGINX logs on the Cisco APIC show the following log line:
23392||2020-06-16T21:04:56.534944300+00:00||aaa||INFO||||Failed to parse AVPair string (shell:domains = aci.domain/admin/) into required data components - error was Invalid shell:domains string (shell:domains = aci.domain/admin/) received from AAA server||../svc/extXMLApi/src/gen/ifc/app/./pam/||813

This log can be found at /var/log/dme/log/nginx.bin.log on the Cisco APIC.
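For context, the shell:domains AV pair maps each security domain to a write-role list and a read-role list ("<domain>/<write-roles>/<read-roles>", roles separated by "|", domains by ","). The parser below is illustrative only; it accepts dots in the domain name, which the affected release did not, and it does not reproduce the exact APIC parsing rules:

```python
# Sketch: parse a shell:domains AV pair into per-domain role lists.
# Illustrative parser, not the APIC's actual implementation.
def parse_av_pair(av_pair: str) -> dict[str, tuple[list[str], list[str]]]:
    _, _, value = av_pair.partition("=")
    domains = {}
    for entry in value.strip().split(","):
        # Pad so entries like "aci.domain/admin/" still yield three parts.
        domain, write, read = (entry.split("/") + ["", ""])[:3]
        domains[domain] = (
            [r for r in write.split("|") if r],  # write roles
            [r for r in read.split("|") if r],   # read roles
        )
    return domains

parsed = parse_av_pair("shell:domains = aci.domain/admin/")
# The dotted domain parses cleanly here: {'aci.domain': (['admin'], [])}
```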


VMM floating L3Out basic functionality does not work. The L3Out port group on a VMware vCenter does not match the configuration in the Cisco APIC. For example, there can be a VLAN mismatch.

Cisco APIC visore will show missing compEpPConn, and the port-group's hvsExtPol managed object will not form hvsRsEpPD to the L3Out compEpPD.


This product includes a version of Third-party Software that is affected by the vulnerabilities identified by the following Common Vulnerability and Exposures (CVE) IDs:


This bug was opened to address the potential impact on this product.


The /data2 partition is filled up with Docker temporary files. The output of df -hu /data2 indicates 100% usage. Log in as root and check the usage under /data2/docker/tmp to confirm that this folder is causing the partition to be full.


There is a BootMgr memory leak on a standby Cisco APIC. If the BootMgr process crashes due to being out of memory, it continues to crash, but the system is not rebooted. After the standby Cisco APIC is rebooted by hand, such as by power cycling the host using the CIMC, the login prompt of the Cisco APIC changes to localhost and you cannot log in to the standby Cisco APIC.


Tenant > Policies > NetFlow > NetFlow Exporters

When a tenant has a large number of EPGs configured, such as over one thousand, navigating to the NetFlow exporters pane and clicking a policy takes several seconds for the application EPGs to be displayed. With a lower number of EPGs, there is no delay.

This is a cosmetic defect due to scale.


After a Cisco APIC upgrade from a pre-4.0 release to a post-4.0 release, connectivity issues occur for devices behind Cisco Application Virtual Edge switches running on VMware.


Using the filter feature for application profiles always returns all of the application profiles.


The default firmware policy is not displayed in the GUI after setting the policy, logging out, and logging in again. The field will be blank and there is no area detailing the current default policy.


VMware vCenter Event logs in the Cisco APIC are not visible in release 4.2(4i).


An SNMPD process crash is observed on two of the Cisco APICs in a three-APIC cluster.


For a tenant name starting with "infra," such as "infratest," the L3Out creation wizard does not allow the user to select a particular VRF instance. Only overlay-1 is allowed, which is the default for the infra tenant.

Another issue is that the Add Pod option does not work in this scenario.


The VMM process crashes and produces core files when looking in Admin -> Import/Export -> Export Policies -> Core -> default -> Operational tab.


During a policy upgrade, the upgrade fails for some of the Cisco APICs with the traceback error "Exception while waiting for turn":

2020-07-19 07:05:35,474|ERROR|28470|installer:577 Exception while waiting for turn:
Traceback (most recent call last):
File "/tmp/tmpIfTqGl/insieme/mgmt/support/insieme/", line 575, in install
File "/tmp/tmpIfTqGl/insieme/mgmt/support/insieme/", line 89, in waitForTurn
thisIndex = ids.index(myId)
ValueError: 0 is not in list

CIMC version recommendations

  • 4.1(1g) CIMC HUU ISO (recommended) for UCS C220/C240 M4 (APIC-L2/M2) and M5 (APIC-L3/M3)
  • 4.1(1d) CIMC HUU ISO for UCS C220 M5 (APIC-L3/M3)
  • 4.1(1c) CIMC HUU ISO for UCS C220 M4 (APIC-L2/M2)
  • 4.0(4e) CIMC HUU ISO for UCS C220 M5 (APIC-L3/M3)
  • 4.0(2g) CIMC HUU ISO for UCS C240 M4 and M5 (APIC-L2/M2 and APIC-L3/M3)
  • 4.0(2g) CIMC HUU ISO for UCS C220 M4 and M5 (APIC-L2/M2 and APIC-L3/M3)
  • 4.0(1a) CIMC HUU ISO for UCS C220 M5 (APIC-L3/M3)
  • 3.0(4l) CIMC HUU ISO (recommended) for UCS C220/C240 M3 (APIC-L1/M1)
  • 3.0(4d) CIMC HUU ISO for UCS C220/C240 M3 and M4 (APIC-L1/M1 and APIC-L2/M2)
  • 3.0(3f) CIMC HUU ISO for UCS C220/C240 M4 (APIC-L2/M2)
  • 3.0(3e) CIMC HUU ISO for UCS C220/C240 M3 (APIC-L1/M1)
  • 2.0(13i) CIMC HUU ISO
  • 2.0(9c) CIMC HUU ISO
  • 2.0(3i) CIMC HUU ISO