

UCS FI bootflash clean but with errors

Last month we began upgrading all our UCS domains to the newer 4.1 code release trains to enable new functionality/hardware and resolve some minor bugs. After we completed our first domain, we were treated to a new fault code on each Fabric Interconnect approximately 45 minutes after its individual upgrade completed. The fault thrown was “Partition bootflash on fabric interconnect X is clean but with errors.” As this domain happened to be the very first UCS domain we ever had, we assumed it might be an issue with the actual NVRAM in the FI. We triaged the case with TAC and determined that this was just enhanced file system checking in the 4.1 code train, and that the fix is to run an e2fsck against the bootflash. In previous versions of UCS this would require the debug utility and manually running some commands. However, in 4.1(2a) and 4.0(4k) Cisco added the ability to run an e2fsck from the UCS CLI on the Fabric Interconnect. We figured this was a one-off case, isolated to this domain, and didn’t think much of it.

However, we just completed our second domain upgrade last night and, lo and behold, one of the two FIs raised the same fault. Now that we’ve encountered this on two of our domains (the second of which is one of our newer domains, though only one of its FIs raised the fault), I’m documenting this for future reference!

Cisco has a public bug report (CCO account required) documenting the release of the enhancement.

The process to run an e2fsck is as follows:

  1. Log in to UCS CLI
  2. Connect to the local-mgmt shell for the FI that has the fault.
    connect local-mgmt <a|b>
  3. Issue the reboot command with the e2fsck argument. This will trigger the FI to reload and run an e2fsck at bootup.
    reboot e2fsck

Note: This will obviously cause one of your Fabric Interconnects to be unavailable while it reloads, so ensure you have a maintenance window and have verified your equipment is properly connected and set up for failover to survive the reboot.

Note 2: It may still take some time for the fault to clear after the FI reboots from its e2fsck. This is normal. If it hasn’t resolved within a few days, open a TAC case, as you may have faulty bootflash.
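Putting the steps together, a full session looks roughly like this (the hostname and prompts are illustrative; this example assumes the fault was raised on fabric interconnect A):

```text
UCS-A# connect local-mgmt a
UCS-A(local-mgmt)# reboot e2fsck
```

After the FI reloads and the file system check completes, the fault should eventually clear on its own.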


UCS Reserved VLANs

Anyone who has spent any time with Cisco equipment should just come to expect that there are a number of VLANs Cisco reserves for internal use. Cisco UCS is no exception. However, in UCS land there are a few curveballs you need to be aware of, depending on version numbers and hardware types.

As I just finished documenting these VLAN IDs internally for our department, and since they’re documented in a few different places on Cisco’s web site, I felt it made sense to put them here for easy consumption later. In addition to the official Cisco documentation, I’ve included my recommendations for reserving additional VLANs to make your life easier.

So, without further blabbering, here’s my current list of reserved VLANs on Cisco UCS, what they’re used for, and whether they can be changed or not.

Official Cisco UCS VLAN Reservations

| VLAN ID | Description | Reference |
|---|---|---|
| 3915-4042 | Used for internal system communication (Cisco UCS 6454 Fabric Interconnects only) | Cisco UCS Configuration Limits |
| 4030-4047 | Used for internal system communication | Cisco UCS Manager Network Management Guide |
| 4048 | Cisco UCS 2.0 and later | Cisco UCS Manager Network Management Guide |
| 4049 | Default FCoE Storage Port Native VLAN ID (Cisco UCS 2.0 and later) | Cisco UCS Manager Network Management Guide |
| 4093 | Used for internal system communication (Cisco UCS 4.0.1(c) and earlier) | Cisco UCS Manager Network Management Guide |
| 4094-4095 | Used for internal system communication | Cisco UCS Manager Network Management Guide |
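When planning VLAN IDs, a quick script can save you from stepping on a reserved range. This is just a sketch of the ranges above (the function name is my own; note it treats the 3915-4042 range as always reserved, which only actually applies to 6454 Fabric Interconnects):

```shell
#!/bin/sh
# Sketch: check a candidate VLAN ID against Cisco UCS reserved ranges.
# Assumes a 6454 domain (3915-4042 included) - trim ranges to suit your FIs.
is_reserved_vlan() {
  v=$1
  # 3915-4042: internal use on UCS 6454 Fabric Interconnects
  if [ "$v" -ge 3915 ] && [ "$v" -le 4042 ]; then return 0; fi
  # 4030-4047: internal system communication
  if [ "$v" -ge 4030 ] && [ "$v" -le 4047 ]; then return 0; fi
  # 4048/4049: default FCoE VLAN IDs (UCS 2.0 and later)
  if [ "$v" -eq 4048 ] || [ "$v" -eq 4049 ]; then return 0; fi
  # 4093: internal use (UCS 4.0.1(c) and earlier)
  if [ "$v" -eq 4093 ]; then return 0; fi
  # 4094-4095: internal system communication
  if [ "$v" -ge 4094 ] && [ "$v" -le 4095 ]; then return 0; fi
  return 1
}

# Quick demo against a few candidate IDs
for v in 100 3211 4040 4093; do
  if is_reserved_vlan "$v"; then echo "$v: reserved"; else echo "$v: usable"; fi
done
```

Running this prints `100: usable`, `3211: usable`, `4040: reserved`, and `4093: reserved`.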

My Bonus Recommended Reservations

Cisco UCS domains that are using Fibre Channel, whether by attaching to an existing Fibre Channel SAN fabric or by having an array directly connected to the Fabric Interconnect, will also require VLAN IDs for the VSANs within UCS. As I always design my storage fabrics as separate A and B fabrics, I also create a separate VSAN ID for each (typically 11 and 12, respectively). Therefore, in my UCS domains I also create two VSANs and assign each a unique VLAN ID for the FCoE traffic to run in.

| VLAN ID | VSAN ID | Description |
|---|---|---|
| 3211 | 11 | Fibre Channel Fabric A |
| 3212 | 12 | Fibre Channel Fabric B |
| 3213 | 13 | Direct-Attached Array FI-A |
| 3214 | 14 | Direct-Attached Array FI-B |
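For reference, creating one of these VSANs from the UCS Manager CLI looks something like this. The object name is my own, and the exact scope syntax is from memory, so verify it against the CLI configuration guide for your UCSM version before relying on it:

```text
UCS-A# scope fc-uplink
UCS-A /fc-uplink # scope fabric a
UCS-A /fc-uplink/fabric # create vsan FabricA 11 3211
UCS-A /fc-uplink/fabric/vsan* # commit-buffer
```

The arguments to create vsan are the VSAN name, the VSAN ID, and the FCoE VLAN ID it will run in, matching the table above.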

Booting ESXi in UEFI mode on Cisco UCS

Note: This process should work for Windows and Linux as well.  Verify the EFI boot path for the OS.

Through ESXi 6.0 I ran my hosts in Legacy BIOS mode on UCS.  Nothing was significant enough to be worth the hassle of switching to UEFI on UCS (rather: I had more important fires to put out…).  The one feature I did want, Secure Boot, wasn’t supported by ESXi 6.0 and earlier.

vSphere 6.5 introduced support for Secure Boot.   Mike Foley has a great blog post about Secure Boot in ESXi 6.5.  If you are starting your 6.5 upgrade and are using Legacy mode, consider switching to UEFI.  It’s minimal effort and increases the security of your hypervisor.

Since I was working on rolling out a new UCS environment with ESXi 6.5 in a remote office, this felt like a great time to switch to UEFI and get the benefits of Secure Boot.  This is not complicated on UCS, but a new Boot Policy must be created.  This policy can be reused for Windows (and other OSes).