Tuesday, 27 June 2017

VMware ESXi important topics



Difference between ESX and ESXi:

ESX (Elastic Sky X):

ESX (Elastic Sky X) is VMware's enterprise server virtualization platform. In ESX, the VMkernel is the virtualization kernel, and it is managed by a console operating system, also called the Service Console. The Service Console is Linux-based, and its main purpose is to provide a management interface for the host; a lot of management agents and other third-party software agents are installed on the Service Console to provide functionality such as hardware management and monitoring of the ESX hypervisor.




ESXi (Elastic Sky X Integrated):

ESXi (Elastic Sky X Integrated) is also VMware's enterprise server virtualization platform. In ESXi, the Service Console is removed; all the VMware agents and third-party agents, such as management and monitoring agents, run directly on the VMkernel. ESXi is an ultra-thin architecture which is highly reliable, and its small code base allows it to be more secure, with less code to patch. ESXi uses the Direct Console User Interface (DCUI) instead of a Service Console for local management of the ESXi server, while remote management goes through the vSphere APIs and clients. ESXi installation also happens very quickly compared to ESX installation.
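
Because there is no Service Console to log in to, day-to-day automation against ESXi goes through the vSphere API. As a minimal sketch of that, the following connects to a host with pyVmomi (the open-source Python SDK for the vSphere API); the hostname and credentials are placeholders:

# Minimal pyVmomi connection sketch (hostname/credentials are placeholders).
import ssl
from pyVim.connect import SmartConnect, Disconnect

# Lab only: skip certificate verification for a self-signed ESXi cert.
ctx = ssl._create_unverified_context()

si = SmartConnect(host="esxi01.lab.local", user="root",
                  pwd="VMware123!", sslContext=ctx)
try:
    content = si.RetrieveContent()
    # Prints something like "VMware ESXi 6.0.0 build-xxxxx"
    print(content.about.fullName)
finally:
    Disconnect(si)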

Difference between vSphere 5, 5.1, 5.5 and 6:




VMFS (Virtual Machine File System):

The following are the key features of VMFS:
  • It simplifies the storage of virtual machines, as multiple virtual machines running on different ESX servers can share a single shared storage area.
  • Multiple instances of an ESX server can run simultaneously and share the same VMFS volume.
  • VMFS strongly supports the distributed infrastructure of virtualization by underpinning various VMware services.
VMFS also has some limitations, including:
  • A VMFS volume can only be shared by up to 64 ESX servers at a time.
  • Logical unit number (LUN) support is limited to a size of 2TB per LUN.


Block size (VMFS-3):

Item                          Maximum
File size (1MB block size)    256GB
File size (2MB block size)    512GB
File size (4MB block size)    1TB
File size (8MB block size)    2TB minus 512B
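
To check these VMFS properties on a live system, a rough pyVmomi sketch like the one below lists each VMFS datastore with its version, block size and capacity (host name and credentials are placeholders):

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()          # lab only
si = SmartConnect(host="esxi01.lab.local", user="root",
                  pwd="VMware123!", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.Datastore], True)
    for ds in view.view:
        info = ds.info
        # Only VMFS datastores carry block-size info (NFS ones do not).
        if isinstance(info, vim.host.VmfsDatastoreInfo):
            vmfs = info.vmfs
            print("%s  VMFS %s  block size %d MB  capacity %d GB" % (
                ds.name, vmfs.version, vmfs.blockSizeMb,
                vmfs.capacity // (1024 ** 3)))
    view.DestroyView()
finally:
    Disconnect(si)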


VMDK (Virtual Machine Disk):


VMDK (Virtual Machine Disk) is a file format that describes containers for virtual hard disk drives used by virtual machines, such as those in VMware Workstation or VirtualBox. In vSphere, a virtual disk is typically stored as a small descriptor .vmdk file plus a larger -flat.vmdk (or delta) file that holds the actual data.
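
As a small illustration of where the .vmdk files live, the sketch below walks a VM's virtual hardware with pyVmomi and prints each virtual disk's label, backing .vmdk path and size; the VM name "web01" and the connection details are placeholders:

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()          # lab only
si = SmartConnect(host="esxi01.lab.local", user="root",
                  pwd="VMware123!", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "web01")   # placeholder name
    for dev in vm.config.hardware.device:
        if isinstance(dev, vim.vm.device.VirtualDisk):
            # backing.fileName is the descriptor .vmdk path on the datastore
            print(dev.deviceInfo.label, dev.backing.fileName,
                  "%d GB" % (dev.capacityInKB // (1024 * 1024)))
    view.DestroyView()
finally:
    Disconnect(si)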



VMSN (Virtual Machine Snapshot):

The .vmsn file extension is used by VMware virtualization software for the VMware snapshot state file. It saves information about the virtual machine's running state for a snapshot, i.e. the saved and frozen state of the VM. Such snapshots are useful, for example, when installing guest operating systems, since the VM can be rolled back if the installation goes wrong. VMware virtualization software creates virtual computer machines on a single physical computer, and the virtual machines it creates can run macOS, Linux, Windows and other operating systems. A virtual machine created by this software is not just another application to be accessed; rather, it is a complete computer that has its own memory, network connections, processors, etc. The .vmsn file extension is a bit like the .vmsd extension. The only difference is that while .vmsn deals with the virtual machine's running state, .vmsd simply contains the snapshot's metadata.
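
To see how a snapshot (and with it a .vmsn file) gets created programmatically, here is a pyVmomi sketch; the VM name and connection details are placeholders, and memory/quiesce are shown with their simplest settings:

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()          # lab only
si = SmartConnect(host="esxi01.lab.local", user="root",
                  pwd="VMware123!", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "web01")   # placeholder name
    # memory=True would also capture the VM's memory image in the snapshot;
    # quiesce=True asks VMware Tools to quiesce the guest file system.
    task = vm.CreateSnapshot_Task(name="before-patching",
                                  description="taken before OS patches",
                                  memory=False, quiesce=False)
    WaitForTask(task)
    view.DestroyView()
finally:
    Disconnect(si)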


Difference between Standard Switches and Distributed Switches:

vSphere Standard Switch

A vSphere Standard Switch is used to provide network connectivity for hosts and virtual machines and to handle VMkernel traffic. A standard switch works with only one ESXi host. It bridges traffic internally between virtual machines in the same VLAN. A standard switch does not require Enterprise Plus licensing; this is one of the real advantages for standard switch users. A standard switch is created at the host level, i.e. we can create and manage a vSphere standard switch independently on each ESXi host. Inbound traffic shaping is not available on a standard switch, and neither is Network vMotion.

vSphere Distributed Switch

A vSphere Distributed Switch allows a single virtual switch to connect multiple ESXi hosts. It is created at the datacenter level and handles the networking configuration of multiple hosts at a time from a central place. A distributed switch allows different hosts to use the switch as if it existed on each host, and it provides centralized management and monitoring of the network configuration of all the ESXi hosts that are associated with the dvSwitch. A vSphere Distributed Switch can give priority to certain traffic and allow other network streams to utilize the available bandwidth. It also includes rollback and recovery for patching and updating the network configuration, and templates to enable backup and restore of the virtual networking configuration. Inbound traffic shaping can be applied on a distributed switch only, and Network vMotion is available on a distributed switch only.

Network vMotion – Network vMotion tracks a virtual machine's networking state as the VM moves from host to host on a vNetwork Distributed Switch. It applies only to a distributed switch.
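
Since a standard switch is created and managed per host (a distributed switch, by contrast, is configured through vCenter), the host-level API reflects that. Below is a minimal pyVmomi sketch that creates a standard vSwitch, assuming a free physical uplink named vmnic1; all names and credentials are placeholders:

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()          # lab only
si = SmartConnect(host="esxi01.lab.local", user="root",
                  pwd="VMware123!", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    host = view.view[0]                  # standalone host: a single entry
    net_sys = host.configManager.networkSystem

    spec = vim.host.VirtualSwitch.Specification()
    spec.numPorts = 128
    # Bind the new vSwitch to a free physical uplink (placeholder name).
    spec.bridge = vim.host.VirtualSwitch.BondBridge(nicDevice=["vmnic1"])
    net_sys.AddVirtualSwitch(vswitchName="vSwitch1", spec=spec)
    view.DestroyView()
finally:
    Disconnect(si)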


Before designing a virtual network for the virtual machines, it is very important to know the features, terms and options available in VMware vSphere for virtual network design. The prime components of a network design are always the switches, so I will start with the switches. The virtual switches inside VMware vSphere have features similar to physical switches, but at the same time there are some differences as well.


Some differences between physical and virtual switches :-

-> A vSwitch does not use dynamic negotiation protocols for trunk or channel establishment (DTP or PAgP).
-> A vSwitch cannot be connected to another vSwitch.
-> As a vSwitch cannot be connected to another vSwitch, STP is not needed on vSwitches.
-> No MAC address learning, as the vSwitch already knows the MAC addresses of the attached VMs.
-> Traffic received on one uplink is never forwarded out another uplink; hence, again, there is no need to run STP.
-> A vSwitch also does not have to perform IGMP snooping, as it knows the multicast addresses of its VMs.

Starting with the switches, there are two types of switches present inside vSphere:
the vSphere Standard Switch (vSwitch or vSS) and the vSphere Distributed Switch (vDS or dvSwitch, introduced in vSphere 4.1).
Both switches reside in the VMkernel and provide traffic management for the virtual machines and for the management traffic (vMotion, iSCSI, etc.). One of the major differences between the two is that standard virtual switches (vSwitches) are managed independently on each individual ESXi host, whereas a distributed switch is managed centrally through vCenter. A distributed switch can have several ESXi hosts attached to it.

Port Group – Port groups can be considered a logical separation of VMkernel traffic and VM traffic.
Virtual machines can have different types of network adapters. Three different network adapters are used inside virtual machines (see the sketch after this list):-
1.) vmxnet adapter – A high-performance 1Gbps adapter. This adapter only works when VMware Tools is installed; it is also called the para-virtualized driver. The adapter is shown as 'Flexible' in the VM's properties.
2.) vlance adapter – A 10/100 Mbps network adapter. It is compatible with most operating systems and is the default adapter until VMware Tools is installed.
3.) e1000 adapter – This adapter emulates the Intel E1000; it is a 1Gbps adapter and is most common in 64-bit VMs.
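
Each of these adapter types maps to a device class in the vSphere API: vlance is VirtualPCNet32, vmxnet is VirtualVmxnet and e1000 is VirtualE1000. As a rough pyVmomi sketch, the following adds an e1000 adapter to a VM; the VM name, port group and credentials are placeholders:

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()          # lab only
si = SmartConnect(host="esxi01.lab.local", user="root",
                  pwd="VMware123!", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "web01")   # placeholder name

    # vlance -> vim.vm.device.VirtualPCNet32
    # vmxnet -> vim.vm.device.VirtualVmxnet
    # e1000  -> vim.vm.device.VirtualE1000
    nic = vim.vm.device.VirtualE1000()
    nic.backing = vim.vm.device.VirtualEthernetCard.NetworkBackingInfo(
        deviceName="VM Network")                # placeholder port group
    nic_change = vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.add, device=nic)
    WaitForTask(vm.ReconfigVM_Task(
        spec=vim.vm.ConfigSpec(deviceChange=[nic_change])))
    view.DestroyView()
finally:
    Disconnect(si)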

This introduction should help you understand virtual networking better. In my future posts I will cover the networking concepts and labs in detail.

VMkernel Ports:

VMkernel ports are special constructs used by the vSphere host to connect with the outside world. They are also known as 'virtual adapters' or 'VMkernel networking interfaces'. You might have seen them as 'vmk' as well, which is short for VMkernel.
The goal of a vmk is to provide layer 2 or layer 3 services to the vSphere host. VMkernel ports provide the following services:
• vMotion traffic
• Management traffic
• iSCSI traffic
• NFS traffic
• Fault Tolerance traffic
• vSphere Replication traffic
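
To see the vmk adapters that exist on a host, here is a quick pyVmomi sketch (host name and credentials are placeholders):

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()          # lab only
si = SmartConnect(host="esxi01.lab.local", user="root",
                  pwd="VMware123!", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    host = view.view[0]
    # Each entry is one VMkernel networking interface (vmk0, vmk1, ...).
    for vnic in host.config.network.vnic:
        print(vnic.device, vnic.portgroup, vnic.spec.ip.ipAddress)
    view.DestroyView()
finally:
    Disconnect(si)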





NIC Teaming:


Include two or more physical NICs in a team to increase the network capacity of a vSphere Standard Switch or standard port group. Configure failover order to determine how network traffic is rerouted in case of adapter failure. Select a load balancing algorithm to determine how the standard switch distributes the traffic between the physical NICs in a team.

About this task

Configure NIC teaming, failover, and load balancing depending on the network configuration on the physical switch and the topology of the standard switch. See Teaming and Failover Policy and Load Balancing Algorithms Available for Virtual Switches for more information.
If you configure the teaming and failover policy on a standard switch, the policy is propagated to all port groups in the switch. If you configure the policy on a standard port group, it overrides the policy inherited from the switch.




Procedure

  1. In the vSphere Web Client, navigate to the host.
  2. On the Manage tab, click Networking, and select Virtual switches.
  3. Navigate to the Teaming and Failover policy for the standard switch or standard port group.
    Standard switch:
      1. Select the switch from the list.
      2. Click Edit settings and select Teaming and failover.
    Standard port group:
      1. Select the switch where the port group resides.
      2. From the switch topology diagram, select the standard port group and click Edit settings.
      3. Select Teaming and failover.
      4. Select Override next to the policies that you want to override.
  4. From the Load Balancing drop-down menu, specify how the virtual switch load balances the outgoing traffic between the physical NICs in a team.
    • Route based on the originating virtual port – Select an uplink based on the virtual port IDs on the switch. After the virtual switch selects an uplink for a virtual machine or a VMkernel adapter, it always forwards traffic through the same uplink for this virtual machine or VMkernel adapter.
    • Route based on IP hash – Select an uplink based on a hash of the source and destination IP addresses of each packet. For non-IP packets, the switch uses the data at those fields to compute the hash. IP-based teaming requires that the physical switch is configured with EtherChannel.
    • Route based on source MAC hash – Select an uplink based on a hash of the source Ethernet MAC address.
    • Route based on physical NIC load – Available for distributed port groups or distributed ports only. Select an uplink based on the current load of the physical network adapters connected to the port group or port. If an uplink remains busy at 75 percent or higher for 30 seconds, the host proxy switch moves a part of the virtual machine traffic to a physical adapter that has free capacity.
    • Use explicit failover order – From the list of active adapters, always use the highest-order uplink that passes failover detection criteria. No actual load balancing is performed with this option.
  5. From the Network Failover Detection drop-down menu, select the method that the virtual switch uses for failover detection.
    • Link status only – Relies only on the link status that the network adapter provides. This option detects failures such as removed cables and physical switch power failures.
    • Beacon probing – Sends out and listens for beacon probes on all NICs in the team, and uses this information, in addition to link status, to determine link failure. ESXi sends beacon packets every second.
    The NICs must be in an active/active or active/standby configuration, because NICs in the unused state do not participate in beacon probing.
  6. From the Notify Switches drop-down menu, select whether the standard or distributed switch notifies the physical switch in case of a failover.
    Note:
    Set this option to No if a connected virtual machine is using Microsoft Network Load Balancing in unicast mode. No issues exist with Network Load Balancing running in multicast mode.
  7. From the Failback drop-down menu, select whether a physical adapter is returned to active status after recovering from a failure.
    If failback is set to Yes, the default selection, the adapter is returned to active duty immediately upon recovery, displacing the standby adapter that took over its slot, if any.
    If failback is set to No for a standard port, a failed adapter is left inactive after recovery until another currently active adapter fails and must be replaced.
  8. Specify how the uplinks in a team are used when a failover occurs by configuring the Failover Order list.
    If you want to use some uplinks but reserve others for emergencies in case the uplinks in use fail, use the up and down arrow keys to move uplinks into different groups.
    • Active adapters – Continue to use the uplink if the network adapter connectivity is up and active.
    • Standby adapters – Use this uplink if one of the active physical adapters is down.
    • Unused adapters – Do not use this uplink.
  9. Click OK.
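
The same teaming and failover settings can also be applied through the vSphere API instead of the Web Client. The sketch below, again with pyVmomi, sets the policy on a standard vSwitch; the switch name vSwitch0 and the uplinks vmnic0/vmnic1 are placeholders, and the policy strings 'loadbalance_srcid', 'loadbalance_ip', 'loadbalance_srcmac' and 'failover_explicit' correspond to the load balancing options above:

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()          # lab only
si = SmartConnect(host="esxi01.lab.local", user="root",
                  pwd="VMware123!", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    host = view.view[0]
    net_sys = host.configManager.networkSystem

    # Fetch the current spec of an existing vSwitch and edit it in place.
    vswitch = next(v for v in net_sys.networkInfo.vswitch
                   if v.name == "vSwitch0")      # placeholder switch name
    spec = vswitch.spec
    teaming = spec.policy.nicTeaming
    teaming.policy = "loadbalance_srcid"  # route based on originating virtual port
    teaming.notifySwitches = True         # Notify Switches = Yes
    teaming.rollingOrder = False          # rollingOrder False ~ Failback = Yes
    teaming.failureCriteria.checkBeacon = False   # link status only
    teaming.nicOrder = vim.host.NetworkPolicy.NicOrderPolicy(
        activeNic=["vmnic0"], standbyNic=["vmnic1"])   # failover order
    net_sys.UpdateVirtualSwitch(vswitchName="vSwitch0", spec=spec)
    view.DestroyView()
finally:
    Disconnect(si)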
