Posts

UCS C-Series ESXi Install

Installing ESXi on a Cisco UCS C-Series is about as straightforward as it gets, and very similar to any standard data center server on the market today. But with a USB install, no supported software RAID, and no optical drive, there can be a few 'gotchas' getting everything set up permanently. This step-by-step install should get you up and running quickly on an externally mounted USB drive. This guide also applies to installs on the Cisco Flexible Flash card.

Cisco Integrated Management Console

Cisco UCS C-Series servers are standalone rack-mounted servers. They can be brought under management with Cisco Fabric Interconnects, but that requires special planning. Fortunately, like most datacenter-ready servers, Cisco UCS C-Series servers include their own out-of-band management called CIMC, the Cisco Integrated Management Console. CIMC is similar to HP's iLO and IBM's IMM. These features allow an administrator to manage a server remotely over the network regardless of whether the server is powered on or has an OS installed – similar to a directly connected KVM.

The CIMC gives you the ability to see the server's status as well as control its power state. It also allows you to attach media, such as an ISO, directly to the machine via a virtual CD/DVD or even a floppy drive. Because of this feature, most Cisco UCS C-Series servers do not come with an optical drive. Without an optical drive, we must first launch and configure the CIMC to install our ESXi hypervisor. Let's go!

(click images for full size – settings are highlighted in yellow)

1.  After first booting the UCS server you will see a status screen. Press F8 to tell the server to enter CIMC setup. The server will continue to initialize and run checks; the screen may change, but the server will eventually enter the CIMC Configuration Utility.

UCS-C220-s0  

2.  Depending on the server build and NIC setup, you will see different options in this menu. Fill in the IP information for CIMC access. If you plan to use only one NIC for CIMC management, you will need to change two settings to get connectivity:

  1. Change NIC redundancy to: None
  2. Change NIC mode to: Dedicated

This allows the dedicated physical management port to be used on its own and leaves the other NICs free for ESXi. Note: this does mean losing redundancy to the CIMC. Use F10 to save the configuration and then ESC to exit and continue booting.

UCS-C220-s0-1

3.  A quick pause before we continue. If you need to change these settings in the future, you don't need to reboot the server – you can go to Admin > Network Settings in the CIMC web interface to change the settings from the step above.

UCS-C220-s1 
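As a side note, the same network settings can usually be changed over SSH using the CIMC CLI. This is only a rough sketch from memory – treat the exact commands as approximate and check the CIMC CLI guide for your firmware version, and note that the IP values below are placeholders:

    scope cimc
    scope network
    set dhcp-enabled no
    set v4-addr 192.168.1.120
    set v4-netmask 255.255.255.0
    set v4-gateway 192.168.1.1
    set nic-mode dedicated
    set nic-redundancy none
    commit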

4.  Once you have exited the CIMC utility, keep an eye out during the boot process – you will see the RAID controller and arrays spinning up. If you have not yet configured your RAID settings, hit CTRL + H during this stage. This will enter the WebBIOS configuration tool.

UCS-C220-s2

5.  Select your RAID adapter – there may be multiple choices depending on how the server was ordered.

UCS-C220-s2-1 

6.  You should now be at the WebBIOS main menu. You will notice groups of 'virtual drives' and 'physical drives'. Most likely no virtual drives will show because you have not yet set up your RAID array. Click on Configuration Wizard to continue.

UCS-C220-s2-2

7.  The next few screens take us through the configuration wizard, which gives you control over how your RAID array is created. You can clear out any existing configuration made earlier, or start a new one if you haven't configured anything yet.

UCS-C220-s2-3

 

8.  Select Yes, because you're ready!

UCS-C220-s2-4

9.  If you have special RAID or disk requirements, you can choose manual configuration to provision that now. I chose Automatic with Redundancy, as our UCS C-Series servers are ordered with local disks intended to run RAID 6 across all drives. This may vary depending on what you are using local storage for. If this is the local storage for guest operating systems, you will probably want to choose the most redundant RAID setup to ensure your VMs stay available in case of a hard disk failure. You can compare the suggested configurations in automatic setup by selecting Back and changing the options.

UCS-C220-s2-5 

10.  Here is the configuration suggested by automatic setup. Perfect – RAID 6 with over 1.6 TB of local storage. You can always select Back to change settings and see what else is recommended.

UCS-C220-s2-6
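If you're wondering where that usable capacity figure comes from: RAID 6 spends two disks' worth of space on parity, so as a rough rule of thumb:

    usable capacity ≈ (number of disks - 2) x single-disk capacity
    e.g. eight 300 GB disks (~279 GiB formatted each): (8 - 2) x 279 GiB ≈ 1.6 TiB

The eight-disk example is just an illustration – your drive count and sizes will obviously vary.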

11. Just your standard warning confirmation screen.

UCS-C220-s2-7

12.  After completing the RAID setup, shut down the server. Make sure your install medium is inserted before powering up. The Cisco Flex Flash card installs inside the server. There is also an internal USB slot in the server, but I have never been able to get it to work correctly. After POST, press F6 for the boot menu. Here you may need to select your install medium – either the Cisco Flex Flash or a USB stick in one of the empty USB slots.

UCS-C220-s3

13.  Now we return to the CIMC via the network (don't select CIMC Config from the boot menu). Click the Server tab, then click Launch KVM Console or the keyboard icon. The KVM will launch, showing you the UCS console. Click on the VM tab, then:

  1. Click on Add Image button
  2. Locate the ESXi ISO you want to install
  3. Select the Mapped checkbox

UCS-C220-s3-1

14.  You may need to restart the server if it has stalled looking for media. Once ready, ESXi will load and then prompt you to select an install location.

UCS-C220-s3-2
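Once the installer finishes and the host reboots, you can optionally confirm things from the ESXi shell (enable it via the DCUI or SSH first). A couple of read-only commands that should work on ESXi 5.x:

    # show the ESXi version/build you just installed
    esxcli system version get

    # list storage devices - the virtual drive created in WebBIOS
    # should appear here as a single local disk
    esxcli storage core device list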

15.  After the install, you can access your ESXi host via the vSphere Client. Select the Configuration tab and then the Storage link under the Hardware group. Finally, click Add Storage.

UCS-C220-s4

16.  When the Add Storage window appears, select Disk/LUN and click Next.

UCS-C220-s4-1

17.  Select the local disk group we set up earlier (depending on your disk arrangement you may have multiple groups here).

UCS-C220-s4-2

18.  Just a quick review here. The important thing to notice is that ESXi sees the disk as blank – a good sanity check!

UCS-C220-s4-3

19.  Enter a name for your datastore – localdisk, lun one, iso storage, etc.

UCS-C220-s4-4

20.  And done! You should see your UCS C-Series local storage available. Browse the datastore, upload some ISOs, and get to building!

UCS-C220-s4-5
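If you like to double-check from the command line, the new datastore should also show up in the ESXi shell:

    # list mounted filesystems, including the new VMFS datastore, its capacity and free space
    esxcli storage filesystem list

    # the datastore is also visible as a mount under /vmfs/volumes
    ls /vmfs/volumes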

VMware Home Lab Guide Part 2: Nested ESXi Hosts

 

This is Part 2 of a 3-part series on the most commonly built virtual home labs. Check here for Part 1 and the Complete Guide.

 

In Part 1 of our Home Lab guide we discussed using a software hypervisor on top of an existing operating system. In this part we will discuss a Type 1 hypervisor running on a bare-metal server. We will again be using nested ESXi hosts to form a 'simulated' environment using only one server. This is the setup I'm currently using, and I believe it gives you the most bang for your buck: it lets you save on hardware as well as power. But compared to the multiple-hosts solution in Part 3 of our guide, you will be simulating some conditions virtually. For example, most networking will take place "inside" your server's ESXi virtual switches.

The setup and buildout are similar to the Type 2 hypervisor approach, with the difference of adding a third ESXi host. What? Yes! A third ESXi host. Think of it this way: in Part 1 of the guide we used VMware Workstation to 'host' our two ESXi hosts, which in turn serve our guest operating systems. The same is true with nested ESXi hosts, except in lieu of Workstation we will be using a clean install of ESXi on a bare-metal server. For example: physical server with ESXi > ESXi virtual machine > Windows Server 2008. Our physical ESXi server is given all of the system resources, which are then distributed to each of the nested virtual ESXi hosts running on top. Confused? Hopefully the diagram below under Setup helps explain. If you still have trouble wrapping your head around the concept, feel free to ask away in the comments.

The most important issue to be aware of with this setup is the strict hardware requirements. You will need to do a fair amount of research into the parts (or server) you buy. Simple features like 64-bit virtual machines or USB passthrough will not be possible without compatible hardware. For instance, running a 64-bit guest OS inside a nested host requires a CPU with EPT (Intel) or RVI (AMD), and USB passthrough requires Intel VT-d or AMD-Vi. Some further tweaks to ESXi may also be necessary depending on your hardware choices.
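One tweak you will almost certainly run into: for a nested ESXi VM to run 64-bit guests of its own on ESXi 5.1/5.5, hardware-assisted virtualization has to be exposed to that VM. The commonly documented approach is to tick the 'Hardware virtualization' option in the vSphere Web Client or add a line to the nested ESXi VM's .vmx file – roughly like this (the guestOS value shown is the one typically used for ESXi 5.x guests; treat this as a sketch, not gospel):

    guestOS = "vmkernel5"
    vhv.enable = "TRUE"

You will also generally need to set the security policy on the outer vSwitch or port group to accept promiscuous mode, otherwise traffic to and from the VMs nested inside your virtual ESXi hosts won't be forwarded.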

Setup

List of software and virtual machines that will be used to create our nested environment on a bare metal physical server.

System Requirements:

  • VMware ESXi 5.1 or 5.5 .iso with free license
  • Intel VT-d or AMD-Vi capable 64-bit processor (see the quick check below)
  • 16 GB RAM (minimum), 32 GB suggested
  • 500 GB free disk space
  • VMware supported Network Interface Card (NIC) (Intel recommended)
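Once ESXi is installed on the physical box, a quick way to confirm that hardware-assisted virtualization is available and enabled (and therefore that nested 64-bit guests should work) is from the ESXi shell. This is the check I've seen referenced most often; take the value interpretation as a general guide:

    # a value of 3 generally means VT-x/AMD-V is supported and enabled in the BIOS
    esxcfg-info | grep "HV Support"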

Virtual Machines:

  • 2 x ESXi hosts running version 5.1 or 5.5
  • 1 x Windows Server 2003/2008/2012 for Domain Controller & DNS
  • 1 x Windows Server 2003/2008 for vCenter
  • 1 x FreeNAS Server for iSCSI or NAS storage
  • Various VMs like CUCM or MS Exchange (optional)

 

Pros

  • Persistent
  • Easily upgradable
  • Closer to enterprise setup
  • Rebuild easily

Cons

  • Needs compatible hardware (research is a must)
  • Limited Resources
  • Fault Tolerance requires special considerations
  • Semi-Simulated

VMware Home Lab Guide Part 1: Nested Virtual Home Lab

There are so many choices and routes possible when designing a virtual home lab that it is very easy to get overwhelmed. Depending on what you want to accomplish and your budget, the possibilities are nearly endless. There are cheap options for fully nested ESXi hosts using VMware Workstation deployed on old or spare equipment, and there are very complex and expensive builds that rival enterprise or SMB solutions with full datacenter-like capabilities. Both have pros and cons. The solution you eventually settle on will need to fall within your budget while providing the features you wish to implement.

You will ask yourself questions at every step of the way. Do I really need an environment with high availability? Does using a virtual NAS appliance give me slightly less performance than a physical NAS, but at zero cost? Do I need low-power, always-on hosts and devices so my energy bill doesn't skyrocket? The list is endless, but it is extremely important that you ask yourself these questions as you begin to design your virtual lab. Looking around the internet will yield tons of blog entries for whitebox build-outs for home lab use. In almost every example you will notice people have made design decisions and sacrifices mostly based on budget. Having a baseline of hardware and keeping an eye out for deals while determining your build and topology can make a huge difference in your lab.

I am going to cover the three most common VMware home lab solutions and the reasons for choosing each one. A lot of the best features of vSphere come from using multiple ESXi hosts controlled by vCenter. Sure, you could just install ESXi on Workstation or a bare-metal host and install a bunch of virtual machines, but you would miss out on a lot of the advancements made possible by having multiple hosts in your environment. All three solutions below will focus on having the following as a bare minimum:

  • two ESXi hosts
  • vCenter Server
  • NFS or iSCSI shared storage
  • Active Directory and DNS

Once the initial setup is complete you will be able to experience and set up advanced features using vCenter. You can clone machines to use as quick-launch templates with preconfigured settings (no more installing upgrades and patches on every machine individually). You can migrate running virtual machines from one host to another in real time using vMotion. There is a ton to explore here, so let's get started.

Solution 1: Nested VMware Workstation

Workstation9

VMware Workstation is a Type 2 hypervisor (virtualization layer) that runs above an existing operating system. Simply put, you install VMware Workstation on top of Windows Server, Windows 7, or Windows 8. Type 1 hypervisors run directly on top of bare-metal hardware; I will talk about those solutions below. Type 2 hypervisors like VMware Workstation & Player and Oracle VirtualBox rely on the operating system to schedule hardware resource usage. They are typically easy to install, and you can have a virtual machine running within minutes. Network settings and virtual network adapters may be a little confusing at first, but the default settings will often work for most needs. Workstation is not free but does have a free trial – and if you pass the VMware VCP, you get a free license! Both VMware Player and VirtualBox are free, and I highly suggest installing one if you are new to virtualization. You can do quite a bit using a simple setup; I have done lab simulations with Microsoft Server 2008 simulating a complete Active Directory and end-user environment. These are very powerful tools that are simple and easy enough to dive in and get swimming.

The real point of using Workstation is that you can utilize your existing desktop/server to build a 'nested ESXi environment' – virtual hardware virtualization. People most commonly explain this concept using the movie Inception: a dream within a dream. After installing Workstation on your Windows computer you would then install two ESXi virtual machines, and then install guest operating systems on those ESXi hosts. For example: Windows 7 > VMware Workstation > ESXi VM > Windows Server 2008. Cool, right?
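To actually boot 64-bit guests inside the nested ESXi VMs, Workstation 9 needs to pass the hardware virtualization extensions through to them. In the VM's processor settings this is the 'Virtualize Intel VT-x/EPT or AMD-V/RVI' option; from memory, it corresponds to the same .vmx entry mentioned in the nested ESXi post above – roughly:

    vhv.enable = "TRUE"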

 

Setup

List of software and virtual machines that will be used to create our nested environment on VMware Workstation. You can check the official product FAQ or getting started guide for exact details.

System Requirements:

  • Windows or Linux 64-bit operating system
  • Intel VT-d or AMD-Vi capable 64-bit processor
  • 8 GB RAM (minimum), 16 GB suggested
  • 150 GB free disk space
  • VMware Workstation 9 installed on the above

Virtual Machines:

  • 2 x ESXi hosts running version 5.0 or 5.1
  • 1 x Windows Server 2003/2008/2012 for Domain Controller & DNS
  • 1 x Windows Server 2003/2008 for vCenter
  • 1 x FreeNAS Server for iSCSI or NAS storage
  • Various VMs like CUCM or MS Exchange (optional)

You will have all of that running on just one computer. Years ago that would have taken a lot of hardware to replicate; these days it just takes a powerful computer with plenty of resources to share with the virtual machines. Depending on what you are working with you may need to upgrade RAM and disk space. RAM is really cheap these days and is probably the easiest upgrade for your system. Most of the testing and labs I will focus on require more RAM than CPU – a lot of your machines will be idle or hardly reserving CPU time, but RAM is always handy, as more RAM = more VMs!

With this setup you can use vCenter to manage your two ESXi hosts, each of which will have a Windows server running on it. The FreeNAS VM will provide disk storage to your Windows servers but will actually use hard drive space on your physical computer. You will be able to use and test advanced features like vMotion and resource provisioning, but you will not be able to use features like Storage vMotion or Fault Tolerance. You are also limited by the upgradeability of your mainboard and system. Also, this is probably not a setup you will run 24 hours a day. When I used this setup I would bring everything up (in a specific required order), complete my testing and labs, then power down until next time. But this is the most affordable and easiest solution to set up – if you already have the software and hardware, you could be using your lab in a few hours.
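As a rough sketch of what the FreeNAS piece looks like from the ESXi side: once FreeNAS is serving an iSCSI target, you enable the software iSCSI adapter on each nested ESXi host and point it at the FreeNAS IP. The commands below are an approximation for ESXi 5.x – the adapter name (vmhba33) and the 192.168.1.50 address are placeholders for whatever your environment uses:

    # enable the software iSCSI initiator
    esxcli iscsi software set --enabled=true

    # find the name of the software iSCSI adapter (often vmhba33 or similar)
    esxcli iscsi adapter list

    # point dynamic discovery at the FreeNAS target (placeholder IP) and rescan
    esxcli iscsi adapter discovery sendtarget add -A vmhba33 -a 192.168.1.50:3260
    esxcli storage core adapter rescan --all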

Pros

  • Cheap
  • Easy Setup
  • Use existing hardware
  • Rebuild easily

Cons

  • Limited Resources
  • Not Persistent
  • Simulated

 

Check out:
Part 2 of the VMware Home Lab Comparison Guide