Tag Archives: vdi

Tuning Windows 8 for EUC deployments

I must preface this post by saying that my goal was to “tune” Windows 8 to achieve IOPS and CPU numbers that were equal to if not better than that of Windows 7. If the Internet is to be believed, this should not have been difficult. The truth is that in the EUC world things are not as simple as they are in the physical desktop world.

One of the easiest ways to reduce CPU utilization in Windows 7 was to disable Aero. I have nothing against Aero, but when you are trying to squeeze as many desktops as you can on each server CPU core it is a luxury you can do without. With Windows 8, Aero is now mandatory (at least it appears that way). No more disabling Aero to squeeze every last bit of performance out of your image.

Aero notwithstanding, tuning Windows 8 is similar in many ways to tuning Windows 7. My tuning is geared towards task worker environments where the focus is on applications.

The following services, unless required, should be disabled (see the VMware View Optimization Guide for Windows 7 for an explanation of each service):

BitLocker Drive Encryption Service
Block Level Backup Engine Service
Diagnostic Policy Service
HomeGroup Listener
HomeGroup Provider
IP Helper
Microsoft iSCSI Initiator Service
Network Connectivity Assistant
Secure Socket Tunneling Protocol Service
Security Center
UPnP Device Host
Windows Backup
Windows Defender
Windows Error Reporting Service
Windows Firewall
Windows Media Player Network Sharing Service
Windows Update
WLAN AutoConfig
WWAN AutoConfig
SSDP Discovery

Place the following commands in a batch file and execute it to remove unneeded scheduled tasks. If you are curious about what each task does, review it in the Windows Task Scheduler before deleting it.

SCHTASKS /Delete /TN "Microsoft\Windows\Application Experience\AitAgent" /F
SCHTASKS /Delete /TN "Microsoft\Windows\Application Experience\ProgramDataUpdater" /F
SCHTASKS /Delete /TN "Microsoft\Windows\Application Experience\StartupAppTask" /F
SCHTASKS /Delete /TN "Microsoft\Windows\Autochk\Proxy" /F
SCHTASKS /Delete /TN "Microsoft\Windows\Bluetooth\UninstallDeviceTask" /F
SCHTASKS /Delete /TN "Microsoft\Windows\Customer Experience Improvement Program\BthSQM" /F
SCHTASKS /Delete /TN "Microsoft\Windows\Customer Experience Improvement Program\Consolidator" /F
SCHTASKS /Delete /TN "Microsoft\Windows\Customer Experience Improvement Program\KernelCeipTask" /F
SCHTASKS /Delete /TN "Microsoft\Windows\Customer Experience Improvement Program\UsbCeip" /F
SCHTASKS /Delete /TN "Microsoft\Windows\Defrag\ScheduledDefrag" /F
SCHTASKS /Delete /TN "Microsoft\Windows\DiskDiagnostic\Microsoft-Windows-DiskDiagnosticDataCollector" /F
SCHTASKS /Delete /TN "Microsoft\Windows\DiskDiagnostic\Microsoft-Windows-DiskDiagnosticResolver" /F
SCHTASKS /Delete /TN "Microsoft\Windows\FileHistory\File History (maintenance mode)" /F
SCHTASKS /Delete /TN "Microsoft\Windows\Live\Roaming\MaintenanceTask" /F
SCHTASKS /Delete /TN "Microsoft\Windows\Live\Roaming\SynchronizeWithStorage" /F
SCHTASKS /Delete /TN "Microsoft\Windows\Maintenance\WinSAT" /F
SCHTASKS /Delete /TN "Microsoft\Windows\Mobile Broadband Accounts\MNO Metadata Parser" /F
SCHTASKS /Delete /TN "Microsoft\Windows\MobilePC\HotStart" /F
SCHTASKS /Delete /TN "Microsoft\Windows\Power Efficiency Diagnostics\AnalyzeSystem" /F
SCHTASKS /Delete /TN "Microsoft\Windows\Ras\MobilityManager" /F
SCHTASKS /Delete /TN "Microsoft\Windows\SideShow\AutoWake" /F
SCHTASKS /Delete /TN "Microsoft\Windows\SideShow\GadgetManager" /F
SCHTASKS /Delete /TN "Microsoft\Windows\SideShow\SessionAgent" /F
SCHTASKS /Delete /TN "Microsoft\Windows\SideShow\SystemDataProviders" /F
SCHTASKS /Delete /TN "Microsoft\Windows\SpacePort\SpaceAgentTask" /F
SCHTASKS /Delete /TN "Microsoft\Windows\SystemRestore\SR" /F
SCHTASKS /Delete /TN "Microsoft\Windows\UPnP\UPnPHostConfig" /F
SCHTASKS /Delete /TN "Microsoft\Windows\Windows Defender\Windows Defender Cache Maintenance" /F
SCHTASKS /Delete /TN "Microsoft\Windows\Windows Defender\Windows Defender Cleanup" /F
SCHTASKS /Delete /TN "Microsoft\Windows\Windows Defender\Windows Defender Scheduled Scan" /F
SCHTASKS /Delete /TN "Microsoft\Windows\Windows Defender\Windows Defender Verification" /F
SCHTASKS /Delete /TN "Microsoft\Windows\Windows Error Reporting\QueueReporting" /F
SCHTASKS /Delete /TN "Microsoft\Windows\Windows Media Sharing\UpdateLibrary" /F
SCHTASKS /Delete /TN "Microsoft\Windows\WindowsBackup\ConfigNotification" /F

Use PowerShell to remove the following Appx (Metro) software packages. Wildcards are used to remove related packages, including apps for Xbox, Zune, Bing, Windows Camera, and Windows Photos.

Get-AppxPackage -Name Microsoft.Bing* | Remove-AppxPackage
Get-AppxPackage -Name Microsoft.XBo* | Remove-AppxPackage
Get-AppxPackage -Name Microsoft.Rea* | Remove-AppxPackage   # optional; Microsoft PDF reader
Get-AppxPackage -Name Microsoft.Zun* | Remove-AppxPackage
Get-AppxPackage -Name microsoft.microsoftsky* | Remove-AppxPackage
Get-AppxPackage -Name Microsoft.Came* | Remove-AppxPackage
Get-AppxPackage -Name microsoft.windowsphotos | Remove-AppxPackage
Get-AppxPackage -Name microsoft.windowscomm* | Remove-AppxPackage

From the Windows Control Panel select “Turn Windows Features On or Off” and disable the following feature:

Windows Gadget Platform

The next settings involve group policies. You can either use traditional group policies applied through Active Directory (AD) or configure the policies on your master image. If your domain is not yet running Windows Server 2012 domain controllers and you want to use AD-applied policies, you will need to install the Remote Server Administration Tools on a Windows 8 desktop (installed via "Turn Windows Features On or Off" in the Control Panel) in order to edit Windows 8 group policy settings within your domain.

Configure the following Group Policy settings:

Computer Policies – Administrative Templates

System – Internet Communication Management – Internet Communication settings
Turn off access to the store : Enabled
Turn off the Windows Messenger Customer Experience Improvement Program: Enabled
Turn off Windows Customer Experience Improvement Program: Enabled
Turn off Windows Error Reporting: Enabled

System – System Restore
Turn off configuration: Enabled
Turn off System Restore: Enabled

System – Windows File Protection
Set Windows File Protection Scanning: Disabled

System – Windows HotStart
Turn off Windows HotStart: Enabled

Windows Components – Desktop Gadgets
Turn off desktop gadgets: Enabled
Turn off user-installed desktop gadgets: Enabled

Windows Components – Desktop Window Manager
Do not allow Flip3D invocation: Enabled
Do not allow window animations: Enabled
Use solid color for Start background: Enabled

Windows Components – File History
Turn off File History: Enabled

Windows Components – Store
Turn off Automatic Download of updates: Enabled
Turn off the Store application: Enabled

Windows Components – Windows Error Reporting
Disable logging: Enabled
Disable Windows Error Reporting: Enabled

Windows Components – Windows Messenger
Do not automatically start Windows Messenger initially: Enabled

Windows Components – Windows SideShow
Turn off automatic wake: Enabled
Turn off Windows SideShow: Enabled
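If you prefer to bake a few of these settings directly into the master image rather than managing them through Group Policy tooling, some of them are backed by well-known policy registry values. The sketch below is an assumption-laden example covering just three of the settings above (Windows Error Reporting, the Customer Experience Improvement Program, and the Store); verify each path and value name against the Group Policy reference for your Windows 8 build before using it:

```bat
REM Sketch: apply a few of the above policies via their registry-backed
REM policy values. Verify these paths/values in your environment first.
reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\Windows Error Reporting" /v Disabled /t REG_DWORD /d 1 /f
reg add "HKLM\SOFTWARE\Policies\Microsoft\SQMClient\Windows" /v CEIPEnable /t REG_DWORD /d 0 /f
reg add "HKLM\SOFTWARE\Policies\Microsoft\WindowsStore" /v RemoveWindowsStore /t REG_DWORD /d 1 /f
```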

You have probably noticed that I did not disable any indexing services. When I did this initially I experienced some odd errors within Windows, so for the time being I am leaving it on. With Metro using the search function more and more to find applications and files I think that indexing is likely a critical component of the desktop moving forward.

I will be updating this post over time as I run more tests and learn more about Windows 8. As it stands today, with these tuning parameters in place I am seeing CPU utilization and disk IOPS with Windows 8 similar to what I saw with Windows 7. These numbers (9.5 IOPS/desktop and CPU utilization sufficient to run 8 desktops per server CPU core) were observed using LoginVSI "medium" user workload simulations. My master image was Windows 8 x86 (32-bit) with 1 vCPU, 1 GB of ram, Office 2010, and Adobe Acrobat X, running on VMware vSphere 5.

VMware View 5.1 Storage Accelerator in Action

Earlier today VMware formally announced the (almost) release of VMware View 5.1. Many assumed that View 5.1 would support the vSphere 5 Content Based Read Cache (also known as CBRC); they were correct. For those who have been living under a rock, vSphere 5 has the ability to cache blocks of a virtual machine in ram, where latency is measured in nanoseconds rather than milliseconds. This is of particular benefit for linked clone virtual machines, where under View 5.x you can have up to 1,000 clones linked to a single image. Note: CBRC is referred to within VMware View as "VMware View Storage Accelerator"; this is the official term now that View 5.1 has been released.
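To see why a small cache goes such a long way, consider a toy model (mine, not VMware's implementation) of a content-based read cache: many linked clones boot from the same replica image, so once any one clone has faulted a block into the host-level cache, every other clone reads that block from ram instead of disk.

```python
# Toy model of a content-based read cache shared by linked clones.
# Illustrative only; this is not how vSphere implements CBRC internally.

def boot_storm_reads(clones, replica_blocks):
    """Simulate `clones` desktops each reading every block of a shared
    replica image through one host cache. Returns (disk_reads, total_reads)."""
    cache = set()
    disk_reads = 0
    total_reads = 0
    for _ in range(clones):
        for block in range(replica_blocks):
            total_reads += 1
            if block not in cache:      # only the first reader hits the disk
                disk_reads += 1
                cache.add(block)
    return disk_reads, total_reads

# 143 desktops on one host (as in the results below), hypothetical
# 1,000-block working set:
misses, total = boot_storm_reads(clones=143, replica_blocks=1000)
reduction = 1 - misses / total          # fraction of reads absorbed by the cache
```

Even this crude model shows the cache absorbing well over 95% of replica reads during a boot storm, which matches the shape of the graphs below.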

Andre Leibovici of VMware has had a series of blog posts all about CBRC. Rather than plagiarize all his hard work, I’m going to recommend you visit his site if you want a technical introduction into how CBRC works.

Earlier this week I finished up some testing that shows exactly what CBRC does. The following graphs show IO reduction for three specific scenarios: 2000 desktop boot storm, logon storm, and on demand virus scan storm. View 5.1 allows you to enable caching for either the master replica image OR the master replica image and the persistent disks. My hosts in this case were rather close to overcommitting memory, so I chose to cache the master replica image only to minimize the amount of ram used for the cache. Read this to understand how much ram you will need based on your own settings.

I created these graphs because they show the greatest benefit from CBRC. Remember that much of the EUC storage workload consists of writes, so I looked for read-heavy scenarios in order to find out what CBRC can really do.

The stats are all reads of the master replica image measured using ESXTOP. Storage array stats are interesting enough, but the truth is that if the ESXi host issues fewer reads, your storage environment will see fewer reads.

The results

To give you some perspective, you are looking at results for an ESXi host running 143 Windows 7 desktops. I was actually testing 2,000 desktops at once, but for simplicity's sake I am showing the results for only one of my hosts.


Do I really need to explain this? Yes, there was still a (small) read spike at the 2-minute mark even with CBRC enabled, but even so you are looking at over a 95% reduction in reads (red line) to the replica image. Even though vSphere uses at most 2 GB of ram for CBRC, the working set (the data that is actually read) of the master replica image is rather small during boot up.


This is a 90 minute logon storm. Again, the benefits of CBRC during this window are obvious. With CBRC enabled (blue line) the reads to the replica image were again reduced by over 95% on average. This would be of great benefit in environments where logon storms were a frequent occurrence.


Let me preface this graph by saying that you really should be using antivirus solutions that are optimized for EUC environments, such as vShield Endpoint (with McAfee or Trend Micro plugins) or even McAfee MOVE. I've tested them all, and they are a huge improvement over traditional client-based AV tools. With that out of the way, you are looking at an AV scan storm that used the McAfee command line AV client. Each AV session was initiated one right after another, a process which takes about 5-7 seconds per desktop. In this case not only was IO reduced by over 70% (blue line), but with CBRC enabled the scans finished in less than a third of the time of the "no CBRC" test. AV scan storms are among the most "stressful" storage tests I run, and CBRC delivered amazing results.

The question is: does CBRC change my storage requirements? My opinion: in most cases, not really. If I were to show you steady state IO during a Login VSI user simulation test, you would see maybe a percent or two reduction in IO to the replica master image, which means you really can't shrink your core storage design. I consider CBRC a safety valve that helps you maintain desktop performance during those periods of load that might otherwise affect it. Given that only a few GB of ram are required, you may find CBRC a no-brainer. As always, test in a lab or with a small pilot before deploying into production.

– Jason

Antivirus for VDI – McAfee MOVE

Antivirus for virtual desktops is not a fun topic, especially when you are trying to shoehorn as many virtual desktops per CPU core as you can onto a server. Snark from Mac users aside, just about every antivirus platform out there will impact the performance of your workstation in some way, usually CPU, ram, or disk related.

Before I start, yes I know that there are alternatives to McAfee MOVE. McAfee MOVE just happens to be the one I tested since I have access to it and years of experience with ePolicy Orchestrator and VirusScan.

The McAfee MOVE Antivirus solution consists of multiple components, each of which plays a different role in the overall solution:

  • McAfee ePolicy Orchestrator Server (ePO) 4.6 – Enables centralized management of the McAfee software products that comprise the MOVE solution. ePO can be installed on Windows Server 2003 SP2 or newer servers, and McAfee recommends using a dedicated server when managing more than 250 clients.
  • McAfee MOVE Antivirus Offload Server – The MOVE Antivirus Offload Server manages the scanning of files from the virtual desktop environment. McAfee VirusScan 8.8 is installed on the MOVE server and performs the actual virus scans. The number of MOVE servers required depends on the aggregate number of CPU cores present in the hypervisors that host the virtual desktops; the actual sizing requirements are discussed later in this post. The McAfee MOVE server requires Windows Server 2008 SP2 or Windows Server 2008 R2 SP1.
  • McAfee MOVE Antivirus Agent – The McAfee MOVE Agent is preinstalled on the virtual desktop master image and is responsible for enforcing the antivirus scanning policies as configured within McAfee ePolicy Orchestrator. The agent communicates with the MOVE Antivirus Server to determine if and how a file will be scanned based on the ePO policies. The McAfee MOVE Antivirus Agent supports Windows XP SP3, Windows 7, and Windows Server versions 2003 R2 SP2 and newer.
  • McAfee VirusScan 8.8 – VirusScan 8.8 is an antivirus software package used for traditional host-based virus scanning. It is installed on the McAfee MOVE Antivirus Offload server as well as the other servers that comprise the VMware View test environment.
  • McAfee ePolicy Orchestrator (ePO) Agent – The McAfee ePO agent is used to manage a number of different McAfee products. In the case of this solution, ePO is being used to manage servers and desktops running either the McAfee MOVE Antivirus Agent or McAfee VirusScan 8.8. The ePO agent communicates with the ePO server for management, reporting, and McAfee software deployment tasks. The McAfee ePO agent is preinstalled on the virtual desktop master image.

How MOVE Works

The benefit of the McAfee MOVE solution is that it offloads the scanning of files to a dedicated server, the MOVE Antivirus Offload Server. The MOVE Offload Server maintains a cache of what files have been scanned, eliminating the need to scan the files again regardless of what virtual desktop client makes the request. This differs from traditional host-based antivirus solutions which may maintain a similar cache of scanned files, but only for the benefit of the individual host and not other hosts. I created the below diagram to explain how the different components of the McAfee MOVE solution interact with one another.
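As a rough illustration of the idea (my sketch, not McAfee's implementation), an offload scanner that caches verdicts by file-content hash only ever pays the scan cost once per unique file, no matter how many desktops request the scan:

```python
import hashlib

# Toy sketch of MOVE-style offloaded scanning: the offload server caches
# verdicts keyed by a content hash, so a file scanned on behalf of one
# desktop is never rescanned for any other desktop. Illustrative only.

class OffloadScanServer:
    def __init__(self):
        self.verdict_cache = {}     # content hash -> "clean" / "infected"
        self.scans_performed = 0

    def _scan(self, data: bytes) -> str:
        """The expensive signature scan; runs once per unique file."""
        self.scans_performed += 1
        return "infected" if b"EICAR" in data else "clean"

    def check(self, data: bytes) -> str:
        key = hashlib.sha256(data).hexdigest()
        if key not in self.verdict_cache:   # first desktop to ask pays the cost
            self.verdict_cache[key] = self._scan(data)
        return self.verdict_cache[key]

server = OffloadScanServer()
# 100 desktops all request a scan of the same (identical) system file:
for _ in range(100):
    verdict = server.check(b"identical system file contents")
```

A traditional per-host cache would have performed that scan once on every desktop; the shared cache performs it exactly once for the whole pool.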


McAfee MOVE architecture

The virtual desktop client runs the McAfee MOVE client and the ePO agent. The ePO agent enables remote management of the MOVE client by the ePO server, while the MOVE agent is responsible for identifying files that need to be scanned and requesting the scan from the MOVE Antivirus Offload Server.

The McAfee MOVE Antivirus Offload Server runs the MOVE Server software, VirusScan 8.8, and the ePO agent. The MOVE Antivirus Offload Server is responsible for answering file scanning requests from the MOVE clients, determining if the file has been scanned before, and performing the virus scan operations if required. The ePO agent is used for remote management of the VirusScan 8.8 antivirus platform.

The ePO server runs the ePolicy Orchestrator software, which is the management platform for the components that comprise the McAfee MOVE solution. The policies configured within ePO control the parameters within which MOVE operates, both in terms of the configuration of the product itself and policies that govern how and when files are scanned.

McAfee MOVE Sizing

One concern when deploying McAfee MOVE is the number of MOVE Antivirus Offload Servers that will be required. The number of servers required is dependent on the aggregate number of CPU cores, including hyper-threading, present in the hypervisors that host the virtual desktops. McAfee recommends a specific configuration for each MOVE Antivirus Offload Server:

  • Windows Server 2008 SP2 or Windows Server 2008R2 SP1
  • 4 vCPUs
  • 4 GB of ram

McAfee recommends leveraging Microsoft network load balancing (NLB) services to distribute the scanning workload across the MOVE Antivirus Offload Servers. NLB enables the creation of a single virtual IP that is used in place of the dedicated IP’s associated with the individual MOVE servers. This single IP distributes traffic to multiple McAfee MOVE servers based on the NLB settings and whether or not the server can be reached. The process for configuring Microsoft Windows NLB for Windows Server 2008 (and newer) is described in the Microsoft TechNet article Network Load Balancing Deployment Guide.

The McAfee MOVE Antivirus 2.0 Deployment Guide recommends one MOVE Antivirus Offload Server for every 40 vCPUs in the hypervisor cluster, including those created by enabling CPU hyper-threading. If the MOVE Antivirus Offload Servers will be installed on the same hypervisors that host the virtual desktops, ten percent of the vCPUs within the hypervisor cluster must be allocated for their use. This means that the hypervisors hosting the MOVE Antivirus Offload Servers will be able to host fewer virtual desktops than may otherwise have been planned for. A minimum of two MOVE Antivirus Offload Servers is recommended at all times for redundancy, regardless of whether the sizing calculations require it. The below table details how the number of MOVE Antivirus Offload Servers required increases as the number of vCPUs in the hypervisor cluster increases.

Hypervisors per cluster | Cores per cluster | vCPUs per cluster (with hyper-threading) | vCPUs required for offload scan servers (10% of vCPUs) | Number of MOVE Offload Servers required

MOVE Offload Server sizing

These figures should be applied on a per-hypervisor cluster basis; if more clusters are created additional McAfee MOVE Antivirus Offload Servers should be deployed and dedicated to the new cluster.
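The sizing rule above (one 4-vCPU offload server per 40 hypervisor vCPUs, with a minimum of two) is easy to turn into a quick per-cluster calculator. The helper below is mine, not McAfee's; sanity-check its output against the Deployment Guide for your own cluster sizes:

```python
import math

def move_offload_servers(hypervisors, cores_per_hypervisor, hyperthreading=True):
    """Estimate MOVE Offload Server count for one cluster using the
    Deployment Guide rule: 1 server per 40 vCPUs, minimum of 2 servers."""
    vcpus = hypervisors * cores_per_hypervisor * (2 if hyperthreading else 1)
    servers = max(2, math.ceil(vcpus / 40))   # one server per 40 vCPUs, min two
    reserved_vcpus = servers * 4              # each offload server is sized at 4 vCPUs (~10%)
    return vcpus, servers, reserved_vcpus

# Example: 8 hypervisors with 16 cores each, hyper-threading enabled
vcpus, servers, reserved = move_offload_servers(8, 16)
```

For that example cluster (256 vCPUs), the rule calls for seven offload servers consuming 28 vCPUs, which is roughly the ten percent overhead described above.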

Installing McAfee MOVE

The MOVE Agent and ePO agents are installed on the master desktop image prior to the deployment of the virtual desktops. Both components can be installed after the virtual desktops have been deployed, although the impact this will have on the growth of linked clone persistent disks (if applicable) should be considered.

Once the installation of the MOVE and ePO agents has been completed on the virtual desktop master image, additional steps are required to prepare the image for deployment. The following steps should be performed prior to any redeployment of the virtual desktop master image, or if the McAfee Framework service has been started prior to the shutdown of the virtual desktop in preparation for deployment:

  1. Stop the McAfee Framework service.
  2. Delete the AgentGUID registry value from the key determined by the virtual desktop operating system:
    1. 32-bit Windows operating systems — HKEY_LOCAL_MACHINE\SOFTWARE\Network Associates\ePolicy Orchestrator\Agent
    2. 64-bit Windows operating systems — HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Network Associates\ePolicy Orchestrator\Agent
  3. Power down the workstation and deploy as necessary.

The next time the agent service is started the virtual desktop will generate a new AgentGUID value which will ensure it is able to be managed by McAfee ePolicy Orchestrator.
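The steps above can be sketched as a small batch file. Treat this as an illustration only: confirm the exact service name for your agent version (shown here as "McAfee Framework Service") and use the Wow6432Node key from step 2 on 64-bit images:

```bat
REM Reset the ePO AgentGUID on a 32-bit master image before deployment.
REM On 64-bit Windows, use the ...\Wow6432Node\Network Associates\... key instead.
net stop "McAfee Framework Service"
reg delete "HKLM\SOFTWARE\Network Associates\ePolicy Orchestrator\Agent" /v AgentGUID /f
REM Shut down and deploy; a new AgentGUID is generated on next service start.
```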

VMware DRS Rules – MOVE Offload Servers

McAfee recommends disabling VMware Distributed Resource Scheduler (DRS) for the virtual MOVE Antivirus Offload Server guests, as scanning activities would be interrupted if a DRS-initiated vMotion were to occur. To accomplish this while leaving DRS enabled for the virtual desktops, a DRS rule was created for each MOVE Antivirus Offload Server that binds the server to a specific hypervisor. To create the DRS rules you must first create virtual machine and host DRS groups; the image below shows the DRS groups as they appear in the DRS Groups Manager tab after they are created. In order to bind a specific virtual server to a specific hypervisor you must create an individual DRS group for each hypervisor and each virtual server. These rules and groups are created on a per-cluster basis.


DRS Groups Manager – DRS Rules

Once the DRS groups have been configured you can then create the DRS rules that will bind the MOVE Antivirus Offload Servers to a specific hypervisor. Figure 91 displays a completed DRS rule that binds VDI-MOVE-01, a MOVE Antivirus Offload Server, to hypervisor vJason1. The option Should run on hosts in group is selected rather than Must run on hosts in group so that VMware High Availability (HA) can still power on the MOVE Antivirus Offload Server if an HA event occurs on the hypervisor hosting it. You must create a DRS rule for each MOVE Antivirus Offload Server within the cluster.


DRS Rules

MOVE Antivirus Offload Servers

The MOVE Antivirus Offload Server software and VirusScan 8.8 were deployed on servers running Windows Server 2008 R2 SP1. The MOVE Antivirus Offload Servers were added to a Microsoft network load balancing (NLB) cluster, per the recommendations from McAfee. The figure below shows the Network Load Balancing Manager interface for the MOVE Antivirus Offload Server NLB cluster, which contains two member servers, VDI-MOVE-01 and VDI-MOVE-02. The virtual IP of the NLB cluster is what the MOVE clients use when contacting the MOVE Antivirus Offload Servers.


NLB Cluster containing McAfee MOVE Offload Servers

McAfee ePolicy Orchestrator Configuration

McAfee ePolicy Orchestrator was used to provide a central point of management and reporting for the virtual desktops within the test environment. The figure below shows the System Tree, which provides a hierarchical view of the clients being managed by the ePO server.


ePO System Tree View

ePO clients are placed into different groups within the system tree based on default placement rules, automated placement rules, or manually by the ePO administrator. For the purpose of the testing, ePO was configured to place the virtual desktop computers in the appropriate group based on what organizational unit (OU) they reside in within Active Directory. The figure below shows the Synchronization Settings for the ePO group Pool A.


ePO Group Synchronization Settings

ePO is configured to synchronize the ePO group with the computer accounts found in the organizational unit Pool A, which is located in the parent organizational unit Desktops. The Pool A desktop computer accounts were placed in that organizational unit by VMware View when desktop Pool A was created. The virtual desktops are placed in different groups in case an additional hypervisor cluster is added; a new cluster would use different MOVE Antivirus Offload Servers and require a unique MOVE ePO policy. The image below shows the Assigned Policies tab for the group Pool A; in this case it shows the MOVE Client policies that are assigned to the Pool A ePO group.


ePO Assigned Policies for Pool A

ePO policies control the configuration of McAfee products that support ePO, including the MOVE agent. To configure the MOVE Agent on the virtual desktops, the policy entries shown in the next two images were configured.


MOVE Agent Policy – General Settings

The highlighted value displayed on the policy General tab is the IP address of the MOVE Antivirus Offload Server NLB cluster previously shown in Figure 92. The IP address must be used; the MOVE Agent does not support the use of DNS names when identifying what MOVE Antivirus Offload Server to use.

The second part of the policy that needed to be updated was the Scan Items tab, which is shown below.


MOVE Agent Policy – Scan Items

VMware KB Article 1027713, the VMware technical note Anti-Virus Practices for VMware View, and the McAfee MOVE Antivirus 2.0.0 Deployment Guide contain information about files and processes that should be excluded from antivirus scanning. These recommendations were made because the scanning of these files prevented various aspects of the virtual desktops, including the antivirus software, from working correctly. These recommendations were incorporated into the path and process exclusion settings in the McAfee MOVE agent policy. The list of items excluded from scanning includes:


Processes:

  • Pcoip_server_win32.exe
  • UserProfileManager.exe
  • Winlogon.exe
  • Wsnm.exe
  • Wsnm_jms.exe
  • Wssm.exe

Paths and files:

  • McAfee\Common Framework
  • Pagefile.sys
  • %systemroot%\System32\Spool (replace %systemroot% with the actual Windows directory)
  • %systemroot%\SoftwareDistribution\Datastore (replace %systemroot% with the actual Windows directory)
  • %allusersprofile%\NTUser.pol
  • %systemroot%\system32\GroupPolicy\registry.pol (replace %systemroot% with the actual Windows directory)

Once the policies are configured and associated with the appropriate system tree group, the clients should begin to report into the ePO server as shown below.


ePO – Pool A Systems

The Managed State and Last Communication columns indicate whether a client is being managed by ePO and when that client last communicated with the ePO server.

McAfee MOVE – Test Results

The McAfee MOVE solution was tested by deploying desktops both with and without the MOVE Agent installed on the master image. Once the desktops were deployed and the virtual desktops all appeared as “managed” in the ePO console, a popular VDI workload generator was used to simulate a user logon storm and steady state workload. The virtual desktops were logged in sequentially over the course of one hour, and the test workload ran for one full hour after the last desktop was logged in and a steady state user load was achieved. Both tests used identical settings; the only difference was whether or not the MOVE agent was installed on the virtual desktops. Three metrics are displayed: storage processor IOPS, ESXi % Processor Time, and ESXi GAVG.

– Storage Processor IOPS

The graph below compares the total number of IOPS across both storage processors observed during the tests. The results of both tests are shown.

McAfee MOVE – Storage Processor IOPS Comparison

There was no significant difference between the storage processor IOPS observed during the two tests. There was a small increase in IOPS during the logon storm phase of the test, associated with the MOVE Antivirus Offload Server needing to scan a number of files for the first time. By the time the logon storm had completed, the MOVE Antivirus Offload Server had cached the scan results for these files, and they did not need to be scanned again for subsequent desktops. This is evident during the steady state phase, where the observed IOPS varied by less than two percent.

– ESXi – % Processor Time

The image below displays the average ESXi CPU load that was observed during the tests.


McAfee MOVE – ESXi CPU Load

The CPU load results were similar for both tests. A slightly higher CPU load was observed during the first half of the logon storm, which can be attributed to the increased antivirus scanning that occurred while the antivirus cache was being established. As the MOVE Antivirus Offload Server built its cache of scanned files, the number of scans required decreased, along with the ESXi server CPU load. The CPU load observed during the steady state phase was similar between both tests.

– ESXi – GAVG (disk response time observed at the hypervisor level)

The next figure displays the average ESXi disk response time, also referred to as the GAVG, observed during the tests. The desktops were deployed as linked clones so the response time for the replica LUN and the linked clone LUN are displayed.


The disk response times observed during both tests were similar for the replica and linked clone LUNs during both the logon storm and steady state phases of the test.


McAfee MOVE provided file-level antivirus protection with very little noticeable impact on the virtual desktops. I expected the performance numbers to stabilize as the MOVE cache warmed up, and based on the metrics provided it is obvious that they did. All in all I was pleased with the performance I saw, and I would recommend that anyone interested in antivirus designed for VDI look at MOVE and see if it meets their needs. If you are already using ePO you can have MOVE up and running in less than an afternoon.

The McAfee MOVE agent installed on the virtual desktops required less than 29 MB of space and the related services utilized approximately 22 MB of memory and no processor time at idle. When compared to the disk, memory, and CPU utilization of the traditional McAfee VirusScan client as observed during my tests, the McAfee MOVE agent used 75 percent less disk space and 60 percent less memory. This does not include the impact of the VirusScan on-access scanner, which was observed utilizing up to 25 percent of CPU time and 220 MB of ram at random intervals. Since the MOVE agent offloads this activity to the MOVE Antivirus Offload Server, the impact on the desktops is drastically reduced.

Whether you look into MOVE or a competing product, it is worth your time to look at “new generation” antivirus solutions for your VDI deployments.

Additional References


· VMware View Architecture Planning

· VMware View Installation

· VMware View Administration

· VMware View Security

· VMware View Upgrades

· VMware View Integration

· VMware View Windows XP Deployment Guide

· VMware View Optimization Guide for Windows 7

· vSphere Installation and Setup Guide

· Anti-Virus Practices for VMware View

· VMware KB Article 1027713


· McAfee MOVE Antivirus 2.0.0 Product Guide

· McAfee MOVE Antivirus 2.0.0 Software Release Notes

· McAfee MOVE Antivirus 2.0.0 Deployment Guide

How to fix some (common?) VMware View problems

While I have worked at EMC for less than 3 months, I have already created and destroyed about 8,000 desktops doing VMware View – EMC VNX testing. The goal of my work is to validate the performance of View running on the VNX platform and document my findings.

As of late I have been working with some performance tuning that increases the number of desktops that View will configure/refresh/recompose at once. Page 18 of the VMware View 5 Performance and Best Practices document details a couple of changes you can make in ADAM that will speed up these actions. VMware has not responded to my request for specifics on the values, but based on my experience pae-SVICreationRampFactor increases the number of desktops that View will deploy at once, while the second value appears to govern refreshes and recomposes (as best as I can tell). The location of these values within ADAM is displayed in the image below.


As VMware states in the Performance and Best Practices document, your vCenter needs to be capable of handling this additional provisioning load, as you will be doubling (and then some) the typical number of operations View performs. While I had no problems with vCenter itself, I did have issues with ESXi 5 hosts being disconnected from vCenter even after increasing the timeout from 30 to 60 seconds. I am going to attempt to gather some esxtop data to see whether the issue is related to the hosts' management agents running out of memory or something else.

The side effect of these hosts being disconnected is that I end up with a number of desktops in a partially configured state, some of which View cannot fix. In some extreme cases you can't remove these desktops from the View instance through the GUI, which means you must edit the View ADAM instance and the View Composer database directly.

The orphaned data you may end up with includes:

  • View desktop and disk information in ADAM
  • View desktop, disk, and outstanding task information in the View Composer database
  • Computer accounts in Active Directory

Removing this information is important, as it can affect your ability to deploy desktops with the affected names again, and stuck View Composer tasks can drag the performance of Composer in general to a crawl (trust me, I've seen it).

Thankfully VMware has created a KB article that explains how to search for these orphaned desktops and remove their information. For the purpose of this article I’m going to assume that you have orphaned desktops that you cannot access; the KB article provides additional options that cover situations where you are able to log into the desktop you wish to remove.

Warning: Before you delete entries from either database, make sure you have a current backup of the database and disable provisioning for the pool in View Manager.

Removing the virtual machine from the ADAM database

Find the virtual machine’s GUID stored in ADAM:

  • Log in to the machine hosting your VMware View Connection Server through the VMware Infrastructure Client or Microsoft RDP.
  • Open the ADAM Active Directory Service Interfaces Editor:
    • On a Windows 2003 server, click Start > Programs > ADAM > ADAM ADSI Edit.
    • On a Windows 2008 server, click Start > All Programs > Administrative Tools > ADSI Edit.
  • Right-click ADAM ADSI Edit and click Connect to.
  • Ensure that the Select or type a domain or server option is selected and the destination points to localhost.
  • Select Distinguished Name (DN) or naming context and type dc=vdi, dc=vmware, dc=int.
  • Run a query against OU=Servers, DC=vdi, DC=vmware, DC=int with this string: (&(objectClass=pae-VM)(pae-displayname=<Virtual Machine name>))
    • Note: Replace <Virtual Machine name> with the name of the virtual machine you are searching for. You may use * or ? as wildcards to match multiple desktops.
  • Record the cn=<GUID>.
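The filter string above is standard LDAP syntax, so if you script this lookup it is easy to build programmatically. A minimal sketch (the helper function is my own illustration, not part of any VMware tooling):

```python
def build_pae_vm_filter(display_name):
    """Build the LDAP filter used to locate a pae-VM object in the
    View ADAM database by display name. The name may contain * or ?
    wildcards to match multiple desktops."""
    return "(&(objectClass=pae-VM)(pae-displayname=%s))" % display_name

# Match a single desktop:
print(build_pae_vm_filter("WIN7-POOL1-042"))
# Match every desktop in a pool using a wildcard:
print(build_pae_vm_filter("WIN7-POOL1-*"))
```

You can paste the resulting string straight into the ADSI Edit query dialog.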

Take a complete backup of the ADAM and View Composer databases. For more information, see Performing an end-to-end backup and restore for View Manager 3.x/4.x (1008046).

Delete the pae-VM object from the ADAM database:

  • Open the ADAM Active Directory Service Interfaces Editor:
    • On a Windows 2003 server, click Start > Programs > ADAM > ADAM ADSI Edit.
    • On a Windows 2008 server, click Start > All Programs > Administrative Tools > ADSI Edit.
  • Right-click ADAM ADSI Edit and click Connect to.
  • Choose Distinguished name (DN) or naming context and type dc=vdi, dc=vmware, dc=int.
  • Locate the OU=SERVERS container.
  • Locate the virtual machine's GUID (recorded above) in the list, which can be sorted in ascending or descending order. Choose Properties and check the pae-DisplayName attribute to verify that you have the correct linked clone virtual machine object.
  • Delete the pae-VM object.
  • Note: Check whether there are entries under OU=Desktops and OU=Applications in the ADAM database.

Removing the linked clone references from the View Composer database

In View 4.5 and later, use the SviConfig RemoveSviClone command to remove these items:

  • The linked clone database entries from the View Composer database
  • The linked clone machine account from Active Directory
  • The linked clone virtual machine from vCenter Server

Before you remove the linked clone data, make sure that the View Composer service is running. On the View Composer computer, run the SviConfig RemoveSviClone command.

For example: SviConfig -operation=RemoveSviClone -VmName=<VM name> -AdminUser=<local admin user> -AdminPassword=<local admin password> -ServerUrl=<View Composer server URL>


  • VmName – The name of the virtual machine to remove.
  • AdminUser – The name of a user who is part of the local administrator group. The default value is Administrator.
  • AdminPassword – The password of the administrator used to connect to the View Composer server.
  • ServerUrl – The View Composer server URL. The default value is https://localhost:18443/SviService/v2_0
  • The VmName and AdminPassword parameters are required; AdminUser and ServerUrl are optional.
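Since I run this command repeatedly during cleanup, a small wrapper that fills in the documented defaults keeps typos out of the command line. A sketch only (the function and its validation are mine; the switches and defaults are from the SviConfig usage above):

```python
def build_svi_remove_command(vm_name, admin_password,
                             admin_user="Administrator",
                             server_url="https://localhost:18443/SviService/v2_0"):
    """Assemble the SviConfig RemoveSviClone command line.
    VmName and AdminPassword are required; AdminUser and ServerUrl
    fall back to their documented defaults."""
    if not vm_name or not admin_password:
        raise ValueError("VmName and AdminPassword are required")
    return ("SviConfig -operation=RemoveSviClone"
            " -VmName=%s -AdminUser=%s -AdminPassword=%s -ServerUrl=%s"
            % (vm_name, admin_user, admin_password, server_url))

print(build_svi_remove_command("WIN7-POOL1-042", "s3cret"))
```

Run the resulting string from the View Composer computer, where SviConfig lives.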

Note: The location of SviConfig is:

  • On 32-bit servers – <install_drive>\Program Files\VMware\VMware View Composer
  • On 64-bit servers – <install_drive>\Program Files (x86)\VMware\VMware View Composer

In View 4.0.x and earlier, you must manually delete linked-clone data from the View Composer database.

To remove the linked clone references from the View Composer database:

  • Open SQL Manager > Databases > View Composer database > Tables.
  • Open the dbo.SVI_VM_NAME table and delete the entire row where the virtual machine is referenced under the NAME column.
  • Open the dbo.SVI_COMPUTER_NAME table and delete the entire row where the virtual machine is referenced under the NAME column.
  • Open the dbo.SVI_SIM_CLONE table, find the virtual machine reference under the VM_NAME column, and note the ID. Attempting to delete this row now will fail because of dependencies in other tables.
  • Open the dbo.SVI_SC_PDISK_INFO table and delete the entire row where the dbo.SVI_SIM_CLONE ID is referenced under the PARENT_ID column.
  • Open the dbo.SVI_SC_BASE_DISK_KEYS table and delete the entire row where the dbo.SVI_SIM_CLONE ID is referenced under the PARENT_ID column.
  • If the linked clone was in the process of being deployed when a problem occurred, there may be additional references to the clone in the dbo.SVI_TASK_STATE and dbo.SVI_REQUEST tables:
    • Open the dbo.SVI_TASK_STATE table and find the row where the dbo.SVI_SIM_CLONE ID is referenced under the SIM_CLONE_ID column. Note the REQUEST_ID in that row.
    • Open the dbo.SVI_REQUEST table and delete the entire row whose ID matches that REQUEST_ID.
    • Delete the corresponding row from the dbo.SVI_TASK_STATE table.
  • In the dbo.SVI_SIM_CLONE table, delete the entire row where the virtual machine is referenced.
  • Remove the virtual machine from Active Directory Users and Computers.
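Because of the dependencies between these tables, order matters when cleaning them by hand: the dbo.SVI_SIM_CLONE row has to go last. A sketch that generates the DELETE statements in dependency-safe order (table and column names come from the steps above; the script itself is just my illustration, and the naive string formatting means you should quote/escape values properly before running anything it emits):

```python
def composer_cleanup_statements(vm_name, sim_clone_id, request_id=None):
    """Return DELETE statements for orphaned linked-clone rows in the
    View Composer database, in dependency-safe order (SVI_SIM_CLONE last)."""
    stmts = [
        "DELETE FROM dbo.SVI_VM_NAME WHERE NAME = '%s'" % vm_name,
        "DELETE FROM dbo.SVI_COMPUTER_NAME WHERE NAME = '%s'" % vm_name,
        "DELETE FROM dbo.SVI_SC_PDISK_INFO WHERE PARENT_ID = %d" % sim_clone_id,
        "DELETE FROM dbo.SVI_SC_BASE_DISK_KEYS WHERE PARENT_ID = %d" % sim_clone_id,
    ]
    if request_id is not None:
        # Only present if the clone was mid-deployment when it failed.
        stmts.append("DELETE FROM dbo.SVI_REQUEST WHERE ID = %d" % request_id)
        stmts.append("DELETE FROM dbo.SVI_TASK_STATE WHERE SIM_CLONE_ID = %d"
                     % sim_clone_id)
    stmts.append("DELETE FROM dbo.SVI_SIM_CLONE WHERE ID = %d" % sim_clone_id)
    return stmts

for s in composer_cleanup_statements("WIN7-POOL1-042", 7, request_id=3):
    print(s)
```

Review the generated statements against your own database before executing any of them.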

Deleting the virtual machine from vCenter Server

Note: If you run the SviConfig RemoveSviClone command to remove linked clone data, the virtual machine is removed from vCenter Server. You can skip this task.

To delete the virtual machine from vCenter Server:

  • Log in to vCenter Server using the vSphere Client.
  • Right-click the linked clone virtual machine and click Delete from Disk.

While this process appears difficult, it really isn't. At the end of the day you have information in about eight places that you need to remove, and the structure of the View Composer and ADAM databases is rather simple. As I said earlier in the article, leaving this information in place can affect View Composer performance and prevent desktops from being created, as View "sees" the names as still in use. I've actually made it a point to check all of these database locations when I am done with a test set, just to make sure that I have a healthy View instance for my next test.

If you decide to start altering the default View settings to speed up recomposes/refreshes/deployments I recommend you make sure your ADAM and SQL backups are current and that you watch out for these issues.

– Jason

VMware View 5 group policies

This is the first post in a small series I am doing that will walk through some of the new features of VMware View 5. This product was announced on August 30th at VMworld, and features a number of improvements.

The subject for today is the new group policy templates that have been introduced with VMware View 5. VMware has added new (Microsoft Active Directory) group policies that grant View admins and architects further control over their VDI environment. The two big additions focus on maintaining control over bandwidth utilization, through more granular control over session image quality, and on user persona control.

Some of the more prominent policies in these new templates focus on new View 5 features such as:

  • Client side caching: Caches image content on client to avoid retransmission
  • Build to lossless: 0-60 second window for the View client to “build” images to a fully lossless state
    • Perceptually lossless: Known as “build to lossless disabled”. Primarily for task and knowledge workers as well as the majority of desktop use cases. Use when bandwidth efficiency is more important than image quality.
    • Lossless (aka “fully lossless”): Best quality available. Use cases include healthcare imaging, designers, illustrators, etc.
  • Persona management: View 5 Persona Management is designed to extend the use cases for stateless desktops by enabling control over more end user settings than ever before. View admins will be able to “manage settings and files, policies such as access privileges, performance and various other settings, as well as suspend-on-logoff, from a central location”. View Persona Management will maintain this personalization across sessions with higher level of performance than previous options.

Let's get to the policies! I am detailing all of the View 5 policies that are available today, although only the first two templates are what you would call new. Most of these settings will be familiar to existing View admins, if not Microsoft admins, so I am just going to list all the settings for the time being. If you want to know more about a setting, comment on the article and I will provide more details.

PCoIP Configuration group policy template – pcoip.adm

This template focuses on PCoIP optimization settings and contains machine group policies located in two different sections: “Overridable Administrator Defaults” and “Not Overridable Administrator Settings”. The settings for each section are the same, the only difference is whether or not the values can be overridden.

Top level hierarchy of the PCoIP Session Variables policies:


The settings in detail (Again, the settings are the same for both the “Overridable Administrator Defaults” and “Not Overridable Administrator Settings”):


These settings are all fairly self-explanatory, and combine to give the View admin a significant amount of control over View PCoIP client connections.

View Persona Management group policy template – ViewPM.adm

This template is for View 5 Persona Management settings and contains machine group policies located in four different sections: Roaming & Synchronization, Folder Redirection, Desktop UI, and Logging.

Top level hierarchy of the VMware View Persona Management computer policies:


VMware View Persona Management > Roaming & Synchronization – Computer Policies:


VMware View Persona Management > Folder Redirection – Computer Policies:


VMware View Persona Management > Desktop UI – Computer Policies:


VMware View Persona Management > Logging – Computer Policies:


Remaining group policy templates (4 in all) – vdm_agent.adm, vdm_client.adm, vdm_common.adm, and vdm_server.adm

These templates are similar to what was included with View 4.6, and are for controlling general settings of the View agents, clients, servers, and other common settings. The policies are broken down as follows:

Top level hierarchy of the View 5 computer policies:


VMware View Agent Configuration (folder root) – Computer Policies:


VMware View Agent Configuration > Agent Configuration – Computer Policies:


VMware View Client Configuration (folder root) – Computer Policies:


VMware View Client Configuration > Scripting Definitions – Computer Policies:


VMware View Client Configuration > Security Settings – Computer Policies:


VMware View Common Configuration (folder root) – Computer Policies:


VMware View Common Configuration > Log Configuration – Computer Policies:


VMware View Common Configuration > Performance Alarms – Computer Policies:


VMware View Server Configuration (folder root) – Computer Policies:


Top level hierarchy of the View 5 user policies:


VMware View Agent Configuration > Agent Configuration – User Policies:


VMware View Client Configuration (folder root) – User Policies:


VMware View Client Configuration > Scripting Definitions – User Policies:


VMware View Client Configuration > RDP Settings – User Policies:


VMware View Client Configuration > Security Settings – User Policies:



That completes the listing of all the View 5 policies that were in place as of this post. If things change when View 5 is officially released I will update this post with the most current information and make a note of any changes of interest.

If you have questions please don’t hesitate to ask! I’ll do my best to get answers for you.

– Jason

VMware View 4.6 Bootcamp Day 9 – View Reference Architecture for Stateless Virtual Desktops

The VMware View Bootcamp ends today with a talk about the View Reference Architecture. Mac Binesh, a Sr. Technical Marketing Manager, outlines some of the typical costs and benefits associated with adopting a virtual desktop infrastructure. I found the content a fitting end to the View Bootcamp series. When I look back at the previous discussions I realize that many of the regular posters have already come to realize the ways that a product such as View can benefit their organization, and not just with regard to CAPEX/OPEX per physical desktop. My notes from the Day 9 video follow.

What is a Stateless Desktop?

  • Referred to as a "floating" desktop in View 4.5/4.6
  • Generic user desktop that is allocated to a user at login
  • User changes to the desktop are discarded at logoff
  • No user-installed applications
  • The VM returns to the desktop pool upon logoff
  • Lower cost per VM can be realized with tiered storage
  • Starting with View 4.5 the "floating" VM can be placed on solid-state disk on a blade server; previously the VM could only reside on the SAN

Reference Architecture Goals and Benefits

  • The reference architecture represents a validated VDI solution that was built and tested by VMware
  • The reference architecture represents a realistic desktop workload, a task/knowledge worker
  • Lower the CAPEX (capital expense) costs as measured on a per-desktop basis

Benefits of Using Stateless Desktops
  • To the User:
    • Fast login times
    • Fast access to applications
    • Easy to reboot the VM
  • To the IT Department:
    • Improved SLA’s
    • Easier to manage than physical desktops
    • No “storms” (boot, login)
    • No SAN required
    • Scales easily
  • To the Business:
    • Reduced CAPEX and OPEX compared to physical desktops
    • Enhanced user productivity (more stable and consistent desktop experience)
    • Enhanced IT productivity (less to manage)
    • Plus all the other benefits associated with leveraging VDI

Cost Analysis

  • Since 2008 the datacenter hardware required by VMware View has decreased in cost by approximately 75%.
  • CAPEX Datacenter (Hardware) Cost Per Stateless Virtual Desktop (figures from VMware): $242
    • Qty 12: 8-core servers with 96 GB of RAM: $212,796
      • 12 desktops per server core
      • Windows 7 32-bit with 1 GB of RAM
    • Datacenter switch: $11,842
    • SAN (20 TB): $69,450
    • Qty 32: 160 GB SSD drives for the servers: $15,968
    • Total: $309,756 ($242 per desktop)

Key Use Cases

  • Remote Office/Branch Office OR Business Process Outsourcing
    • Reduced costs since desktops and users are centrally managed.
    • Sensitive data stays in the data center.
    • Streamlines application and desktop deployment.
  • Labs, Kiosks, and Training Centers
    • Supports distance learning environments.
    • Rapidly provision desktops.
    • Enhanced security with centralized control and management.
    • Reduced costs and increased control.

View 4.5 Scalability Testing and Results

  • For VMware testing the following architecture was used:
    • Linked clone replica base image resided on local SSD storage
    • Parent base image, user data, and the VMs' .vswp files resided on the SAN (shared storage)
      • .vswp files and infrastructure VMs (the parent base image and the View servers themselves) resided on NFS
      • Standard file shares were used for Windows user data redirection
    • Non-persistent automated pool that refreshes immediately
  • Test Strategy and Success Criteria
    • Establish a baseline for desktops per server
      • Gradually increase the number of desktops until the resources of the server are maxed out
      • Reduce the number of desktops until utilization is at an acceptable level (VMware used ~70% CPU utilization as their figure)
    • Start with two servers, then scale out in two server increments
      • Validate application performance with each server scale out
    • Monitor SSD utilization
  • Test Results
    • VMware was able to scale to 96 desktops on an 8-core server (12 VMs per server core)
    • Varied reboots of VMs were sustained without consuming 100% of the system resources
    • 10 Gbit Ethernet combined with the use of local storage made it obvious that the networking environment could handle a fully scaled server load
    • The traffic observed is similar to that of a typical file server
  • VMware View 4.5/4.6 with tiered storage drastically lowers CAPEX and simplifies cost modeling of desktop virtualization
  • Testing has proven that VMware View 4.5/4.6 provides linear scalability across both compute and storage regardless of scale
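The baseline procedure in the test strategy above is easy to express as a loop: add desktops until host CPU utilization crosses the target, then settle on the last acceptable count. A toy sketch (the ~70% target is VMware's figure; the per-desktop CPU model is invented purely to make the loop runnable):

```python
def find_desktop_baseline(cpu_at, target=70.0, max_desktops=500):
    """Gradually increase the desktop count until host CPU utilization
    (percent) would exceed the target, then settle on the last
    acceptable count -- the baseline procedure from the test strategy."""
    n = 0
    while n < max_desktops and cpu_at(n + 1) <= target:
        n += 1
    return n

# Invented linear model: each desktop adds ~0.5% host CPU (illustrative only).
print(find_desktop_baseline(lambda n: 0.5 * n))  # settles at 140 desktops
```

In practice cpu_at would come from esxtop samples taken while the workload generator runs, not from a formula.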

References from the presentation plus some I have added:

I hope everyone has enjoyed my summaries of the VMware View Bootcamp videos. I was interested in viewing the videos simply because I am doing increasing amounts of work with VDI and virtualization in general. My focus is more on validating the data center architecture, so I find these videos valuable, as they remind me of the end-to-end requirements of the solution as a whole. Follow the Day 9 discussions on the VMware forum page.