
February 8, 2023

Windows Pagefile Done Right

Over the years there has been a lot of guidance on configuring the pagefile, and the recommendations on size and configuration have evolved. Virtual machine RAM allocations keep climbing, with guidance from the likes of Citrix and VMware suggesting VMs with 64GB+ of RAM. Paging is an interesting topic: setting the pagefile too large wastes disk space, while removing the pagefile entirely is also bad, because Windows expects a pagefile even when we do not want the system to page. Without one, Windows will complain about running out of virtual memory once physical memory is fully allocated. No one wants to page, and systems should be built to minimize paging by allocating enough memory for folks to run their applications without it, but they should still be configured to allow some paging as a "just in case" measure. Lastly, the out-of-box configuration of a system-managed pagefile is also a bad idea.

Setting the pagefile to system managed in an enterprise environment is a bad practice. Why? Most monitoring tools track pagefile utilization, typically as a percentage. When the pagefile is system managed and no minimum/maximum are set, the pagefile size floats, which can cause monitoring tools to report excessive pagefile utilization.

The old-school approach of setting the pagefile to 1.5x the size of RAM is just as problematic. The usual justification is wanting to capture some sort of full dump for Microsoft to "analyze". When was the last time Microsoft successfully analyzed a dump with meaningful results? For me it has never happened.

Here is what the drives would look like if, for instance, we allocated 64GB of RAM to a VM and used the old-school 1.5x approach, which works out to a 96GB pagefile:

Here is the error in the event log when the pagefile is completely eliminated and virtual memory runs low:

What is the happy medium? Set the pagefile to 4096MB (4GB) for both the minimum and maximum on single-user operating systems, and 8192MB (8GB) for both the minimum and maximum on multi-user operating systems.


Why is this the happy medium? It checks all of the boxes: it gives Windows a pagefile if one is needed, it allows a minidump to be configured, captured and analyzed if needed, and it is not so large that we feel like we are wasting space.
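If you want to bake the fixed pagefile into an image or automation rather than clicking through the System Properties dialog, the setting ultimately lives in the PagingFiles registry value, which is what that dialog writes. Here is a minimal Python sketch of that idea; it must run elevated, the 8192/8192 sizing and C:\pagefile.sys location simply reflect the multi-user recommendation above, and any GPO or management tooling that also manages the pagefile will take precedence:

```python
import winreg

# Fixed pagefile entry: "<path> <initial MB> <maximum MB>" in the PagingFiles
# value (REG_MULTI_SZ). 8192/8192 matches the multi-user recommendation above;
# use 4096 4096 for single-user operating systems.
PAGEFILE_ENTRY = r"C:\pagefile.sys 8192 8192"

key_path = r"SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key_path, 0, winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "PagingFiles", 0, winreg.REG_MULTI_SZ, [PAGEFILE_ENTRY])

print("PagingFiles set; the change takes effect after a reboot.")
```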

What about memory dumps? If there is a need for some sort of memory dump to be captured and analyzed, Windows can be configured to generate a minidump. There are plenty of articles and blog posts on the internet on how to configure Windows to produce a minidump.
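For reference, the dump type lives next door in the CrashControl key: setting CrashDumpEnabled to 3 selects a small memory dump (minidump). A minimal sketch in the same spirit as the pagefile example above; verify the value against your own Windows build before relying on it:

```python
import winreg

# CrashDumpEnabled: 0 = none, 1 = complete, 2 = kernel, 3 = small memory dump (minidump)
key_path = r"SYSTEM\CurrentControlSet\Control\CrashControl"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key_path, 0, winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "CrashDumpEnabled", 0, winreg.REG_DWORD, 3)

print("Small memory dump (minidump) configured; takes effect after a reboot.")
```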

If you have any thoughts, we would like to hear from you in the comments.

Johnny @mrjohnnyma

July 21, 2022

Don't Treat Your Virtual Desktop Security Like Your Physical Desktop Security

 

This blog post complements a previous post I wrote a while back about not using your physical image in your virtual environment. To check that one out, please refer here.

Are you using anti-virus, anti-malware, data loss prevention (DLP) software or the like on your virtual desktops? Are you treating them the same as you would on a physical desktop? If the answer was yes to both, this is the blog for you. (If you are not using anti-virus on your virtual desktops at all, that is a whole other conversation and a potential can of worms that needs to be addressed.) When running any of the various security tools out there, we need to configure the proper exclusions so that everything runs properly and users do not see performance degradation because those exclusions are missing. I see this all the time: folks either never implement the proper security tool exclusions in their virtual desktop images, or they configure the exclusions in the various consoles and can show them when asked, but the machines are not landing in the proper container to actually receive them.

I was recently working with a customer that was suffering severely slow application launch times in applications such as Outlook, Teams and OneDrive. Upon examination, they were capturing things like the Outlook OST, the Teams cache and the OneDrive cache into virtual disks stored on a network share as VHD/VHDX files. When users logged onto a virtual desktop and these virtual disks mounted, the disks were actively scanned by anti-virus, and when the Outlook, Teams and OneDrive clients tried to read data on those virtual disks, performance was hampered by the scan.
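As one concrete illustration, if the environment happens to use Microsoft Defender, container share paths and VHD/VHDX extensions can be excluded with the Add-MpPreference cmdlet. The sketch below is only an example under that assumption; the UNC path is a placeholder, and other security products have their own exclusion mechanisms, so the relevant vendor-recommended exclusion lists for your profile container solution should be followed rather than guessed at:

```python
import subprocess

# Placeholder container share and the extensions to exclude from scanning.
exclusion_paths = [r"\\fileserver\containers"]
exclusion_extensions = ["vhd", "vhdx"]

def add_defender_exclusion(argument, value):
    # Add-MpPreference is the Microsoft Defender cmdlet for adding exclusions.
    subprocess.run(
        ["powershell.exe", "-NoProfile", "-Command",
         f"Add-MpPreference {argument} '{value}'"],
        check=True,
    )

for path in exclusion_paths:
    add_defender_exclusion("-ExclusionPath", path)

for ext in exclusion_extensions:
    add_defender_exclusion("-ExclusionExtension", ext)
```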

The above applies not only to non-persistent desktops but to fully persistent desktops as well. I know I will get the response of "aren't persistent desktops the same as physical desktops?" The answer is yes and no. Anything written to disk is fully stateful, and there may or may not be any profile management happening on these desktops, but the core virtual desktop components are still installed to deliver the remote display capability, along with the requisite virtual channels for things like audio/video redirection and offloading. Therefore we still need the proper security tool exclusions to keep everything as optimized as possible while maintaining the security posture. In addition, modern-day laptops and desktops potentially have a lot more CPU and RAM than what is allocated to a virtual desktop, so an un-optimized anti-virus/anti-malware utility's impact on the physical side may not be as noticeable.

Long story short, spending a little extra effort to make sure security tools are configured properly will save the headaches of dealing with complaints about a bad user experience. As I said in the previous blog, the common adage "you can't build a house on a bad foundation" holds very true in this conversation as well.

If you have any thoughts, we would like to hear from you below in the comments.

Johnny @mrjohnnyma

April 19, 2021

Replacing a Self-Signed Certificate on vCenter 7.x +

Purpose:

Demonstration on how to replace the self-signed certificate on VMware vCenter.

Introduction:

Having valid certificates is not only crucial today and going forward, it has been crucial for the last few years as well. Valid certificates not only ensure that a certain security posture is maintained, they also remove the unsightly certificate warnings that make various products unfriendly for administrators, engineers and architects to use.

I recently transitioned from Nutanix Community Edition (CE) to VMware vSphere in my home lab due to upgrade issues with the most recent release of CE. VMware vSphere 7.x and above resolved an issue where the NIC in an Intel NUC 10 was not detected during installation, whereas with CE the driver needed to be sideloaded before it could be installed. This is a continuation of my blog series focusing on security from a virtualization standpoint. Here is a similarly themed blog about how to replace the self-signed certificate in Nutanix Prism Element and Prism Central.

Today we will talk about how to replace the certificate on vCenter and how significantly easier it has become to do so. Before I start, I will preface this by saying the process only applies to VMware vCenter 7.0 and above at the time of this writing. If folks are still running vCenter 6.5 or 6.7, this will not work there as the process is completely different. Also, this does not only affect Citrix; it affects VMware Horizon and any other solution that integrates with vCenter.

How many of us have, in the past or even today, checked the box on this message to acknowledge and trust the self-signed certificate in an on-prem or cloud-based full Citrix Studio?


Most of us probably click through it without thinking twice about why the warning appears, or wave it off as “that is not my problem, it is the vSphere team’s problem”. While it may be the vSphere team's problem, security should be a concern for all IT folks, as many system compromises could easily be avoided with a security-first mentality. In addition, replacing the certificate will remove the warning from vCenter when folks use the vCenter web console.

In vCenter 7.0 and above it is very easy to replace the certificate so that the warning never even pops up when establishing the Hosting connection string from Studio. 

Configuration Steps:

First we will need to create a certificate; in my case I will be using a domain certificate authority (CA). A certificate from a well-trusted third-party CA can also be configured in this manner.

I find it easier to generate the CSR on the vCenter itself; later in this post I will show an interesting issue that comes from generating the CSR elsewhere.

Go to vCenter and log in as administrator@vsphere.local (this is the only account that has permission to manage certificates). At the top, go to Menu -> Administration

On the left pane -> Click Certificate Management

Under Actions -> Click Generate Certificate Signing Request (CSR)

Fill out the information appropriately -> Click Next

Copy or Download the CSR -> Click Finish
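Before submitting the CSR to the CA, it can be worth sanity-checking the subject and key size in the downloaded file. An optional sketch, assuming the third-party cryptography package (pip install cryptography) and that the CSR was saved as vcenter.csr:

```python
from cryptography import x509

# Load the CSR downloaded from vCenter (PEM format) and print the basics
# before handing it to the CA.
with open("vcenter.csr", "rb") as f:
    csr = x509.load_pem_x509_csr(f.read())

print("Subject:   ", csr.subject.rfc4514_string())
print("Key size:  ", csr.public_key().key_size, "bits")
print("Signature: ", csr.signature_hash_algorithm.name)
print("Valid sig: ", csr.is_signature_valid)
```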

Open a browser and go to https://domainca.fqdn.com/certsrv, replacing the hostname with your domain CA's FQDN (in my case it is domain1.domain.lab) -> Click Request a Certificate

Click Advanced Certificate Request

Click Submit a certificate request by using a base-64-encoded CMC or PKCS #10 file, or submit a renewal request by using a base-64-encoded PKCS #7 file

Copy and paste the contents of the CSR file generated earlier into the large field -> Select the appropriate certificate template -> Click Submit

After submitting, the certificate request may be pending if the CA is configured to require approval (as it is in my lab). Get the proper approval to issue the certificate

After approval go back to https://domainca.fqdn.com/certsrv -> Click View the Status of a Pending Certificate Request

Click on the Request from earlier -> Click on the Request
Select Base64 encoded -> Download the Certificate

Save it with an appropriate name to a location where it can be accessed later –> Click Save


The domain CA’s root and intermediate certificates also need to be exported as .cer files. In my case, these can be found on the domain controller under Certificate Manager for the Local Machine -> Trusted Root Certification Authorities -> Certificates.

Back in vCenter -> Administration -> Certificate Management, we need to import the root and intermediate certificates so that the new cert will be trusted -> Click Add

Browse to the root cert -> Click Add

After adding, there are now multiple Trusted Root Certificates

For the Machine Cert section, click Actions -> Import and Replace Certificate

Select Replace with external CA certificate where CSR is generated from vCenter Server (private key embedded) as the CSR was generated on the vCenter -> Click Next

For the first field -> Click Browse File and select the certificate the domain CA issued. For the second field -> Click Browse File and select the domain CA root certificate that was exported. If there are both root and intermediate certificates, they may need to be combined into a single file in Notepad –> Click Next
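If you would rather not assemble the chain by hand in Notepad, combining Base64/PEM certificates is just concatenating the files; intermediate(s) first and the root last is the usual ordering. A minimal sketch with placeholder file names for the .cer files exported earlier:

```python
# Combine intermediate and root CA certificates into one PEM chain file.
# File names are placeholders for the Base64-encoded .cer files exported earlier.
chain_parts = ["intermediate.cer", "root.cer"]  # intermediate(s) first, root last

with open("ca-chain.cer", "w") as chain:
    for part in chain_parts:
        with open(part) as f:
            chain.write(f.read().strip() + "\n")

print("Wrote ca-chain.cer with", len(chain_parts), "certificates")
```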

vCenter services will automatically restart, which takes a few minutes. It is common to see this message while the services restart.

When vCenter is back up, log back in and go to the Certificate Management section. The machine cert should now show an updated expiration date. Track that date and make sure to repeat this process before the certificate expires, so that everything continues to run smoothly for any services that integrate with vCenter.
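If you want to keep an eye on that expiration date without logging into vCenter, the served certificate can be pulled and parsed remotely. A small sketch using only the Python standard library; vcenter.domain.lab is a placeholder FQDN, and the check only succeeds once the machine running it trusts the issuing CA:

```python
import ssl
import socket
import time

HOST = "vcenter.domain.lab"  # placeholder vCenter FQDN
PORT = 443

# Pull the certificate vCenter is currently serving and read its expiry.
context = ssl.create_default_context()
with socket.create_connection((HOST, PORT)) as sock:
    with context.wrap_socket(sock, server_hostname=HOST) as tls:
        cert = tls.getpeercert()

expires_epoch = ssl.cert_time_to_seconds(cert["notAfter"])
days_left = int((expires_epoch - time.time()) // 86400)
print(f"{HOST} certificate expires {cert['notAfter']} (~{days_left} days left)")
```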

There are also no longer certificate warnings when going to the vSphere web client, and when the certificate is viewed, it is the appropriate certificate.

The Hosting section in Studio connects to vCenter without a warning now as well.

If you generated the CSR outside of vCenter and went through the process of issuing the certificate, you could get this error like I did. There really isn’t an obvious reason why the character was invalid, which is why I recommend generating the CSR on vCenter.

Conclusion:

VMware has made it significantly easier to replace the certificate in vSphere 7.x than it was in 6.x; in my opinion it is almost a no-brainer to do this. We didn't need to incur any additional cost since the certificate was generated from a domain CA, but this process would also work if you need to get a signed certificate from a third-party CA. If we take an overall approach of focusing on security at each layer of the infrastructure, we significantly improve the security posture of the entire environment and eliminate as many security flaws as possible.

We would like to hear from you so feel free to drop us a note if you have any questions.

Johnny @mrjohnnyma

February 10, 2021

Stop Using Windows 10 LTSC



Windows 10 comes in all kinds of variations. If you have ever run Windows 10 in a virtual desktop capacity, there has no doubt been a consideration of using the Long-Term Servicing Channel (LTSC) as opposed to the Semi-Annual Channel (SAC) that is traditionally used on physical endpoints. The two supported LTSC versions out there as of this posting are Windows 10 1607 LTSB (Branch was the channel name before Microsoft changed it to Channel) and 1809 LTSC. As you can see, these are locked-in-time versions of Windows 10 that carry long, fixed support lifecycles (5 years of mainstream support, with extended support beyond that) before upgrades are needed, and if/when upgrades are performed you can leapfrog from one LTSC version to another to remain supported. The Semi-Annual Channel has a servicing timeline of 18 months from release, meaning you should be off a particular version within 18 months of its release to maintain support. If we look at the benefits of LTSC vs SAC, we can clearly see some very appealing things.

Here are examples of the big ones: 
  • No Windows Store
  • No Microsoft Edge (we are talking about classic Edge not Chromium Edge)
  • No Cortana 
  • No feature updates
  • Less of the question “Who Moved My Cheese” (If you get that reference)

These few items have had administrators and engineers spending countless hours over the years trying to disable them in their virtual desktop environments via scripts, registry changes and disabled services. If we take a step back and look at it, running Windows 10 LTSC gives you all of the good things and none of the bad things of running Windows 10 as a virtual desktop. I used to compare it to running Windows 7 with a Windows 10 wrapper, and if you were running LTSC in your virtual desktop environment you were probably very happy.

Then, on February 1st 2018, Microsoft came along and messed it all up. They posted an article stating that Office 365 ProPlus would no longer be supported on any version of Windows 10 LTSC effective January 14th 2020. With the wide adoption of Office 365, the availability of E3/G3 or E5/G5 licensing and the ability to download the offline Office clients with these subscriptions, it is imperative to maintain support. Microsoft has always taken the stance that LTSC should only be used where the key requirement is that functionality and features don’t change over time. Examples include kiosks, medical systems, industrial process controllers and air traffic control devices. These are systems where upgrades could have very detrimental effects on functionality, or systems that cannot be updated due to security and network related reasons.

This caused panic and forced IT departments to pivot away from LTSC back to the SAC, and administrators and engineers now needed to figure out a way to test and accommodate these updates to prevent issues in their virtual desktop environments.

The long story short, or TLDR, version of this: when deciding between the LTSC and SAC versions of Windows 10, it really should be a no-brainer, and SAC is the only way to go. For those who say they will never move to Office 365 from their on-prem Exchange because of all the requirements that prevent it, all I can say is “never say never”. You do not want to be the reason an entire environment is unsupported and become the bottleneck to environmental progress. If you are working with a VAR/partner and they tell you that LTSC is the way to go for your virtual desktop image, you may want to re-evaluate that partner, as they may be leading you down a bad path.




July 22, 2020

Don't Use Your Physical Image in Your Virtual Environment


Are you using SCCM, WDS or other deployment tools, or have you been asked to, when deploying your virtual desktops or virtual application servers? If so, there can be some serious issues with this. I am often asked by folks about deploying Citrix or VMware Horizon images using the same image that is used for physical endpoints. Not only is this a bad idea, it can have performance ramifications and lead to best practices not being followed.

I have always been a believer that hand-building the operating systems for virtual desktops and application delivery servers is the best approach because it ensures we know what went into the image. I understand the gripes about manually installing applications and the extra work, but that extra work now can save a lot of headaches later, and "this is how we build our images" is not a good enough reason to justify using the same image in the virtual environment. Often, and in most cases, the person doing the deployments and the person running the virtual desktop environment are not the same person. They build images for physical endpoints or on a completely different hypervisor, they never optimize the image, and they just let things fly. Since physical endpoints have dedicated hardware, they rarely if ever experience issues from being unoptimized. In the datacenter, on a virtual desktop or an application delivery server that shares host resources with other virtual machines, we need to optimize things as much as possible.

Here are two recent examples of environments where deploying the same image used on physical endpoints caused issues:

  1. The first was in the medical field, where the customer wanted to move from persistent Windows 10 desktops to pooled non-persistent virtual desktops because the administrative overhead of maintaining persistent desktops with deployment tools was not feasible. Also, when asked to justify the need for a persistent desktop pool, the response was “that is how we have deployed it before”, so there really was no reason to have it. When it came time to build the Windows 10 non-persistent image, the customer completely disregarded my suggestion to build the base image by hand and used WDS to deploy the “standard” image that is deployed on physical endpoints. The end result was a known bug in that image in which the Start menu stopped responding to left clicks. This bug also existed on physical endpoints, where it was hacked around by copying a profile over the default profile, but when the same workaround was applied to the non-persistent desktop image it caused Citrix Profile Management to create temp profiles on each login. After countless days of the customer trying to remediate this, the only fix that worked was to break out the ISO, install the operating system by hand and manually install the applications, after which everything functioned correctly.
  2. A second example was a large law firm migrating from an on-prem Citrix environment to VMware Workspace ONE. When it came time to build the images for their RDS linked-clone pool, they insisted on using an existing task sequence that was built for Windows 10 and forcing it to target a Windows Server 2016 operating system. The issue here is that the applications were installed before the RDS Session Host role. It has long been a known best practice on RDS Session Host servers to install the RDS Session Host role prior to installing applications, because application settings may need to be captured into the RDS shadow key (see the sketch after this list). In this environment, there are small abnormalities in application behavior even today due to the incorrect installation sequence.
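For context on that last point, an RDS Session Host has an install mode that captures per-user settings written during setup into the RDS shadow key so later users inherit them; the built-in change user command toggles it. A hedged sketch of wrapping an installer that way; the installer path and silent switch are placeholders, and real deployments would normally let their installation tooling handle this step:

```python
import subprocess

# Placeholder path to an application installer being baked into the image.
INSTALLER = r"C:\installs\example-app-setup.exe"

def run(cmd):
    print(">", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Put the RDS Session Host into install mode so per-user settings written
# during setup are captured into the RDS shadow key for later users.
run(["change", "user", "/install"])
try:
    run([INSTALLER, "/quiet"])  # silent-install switch is installer-specific
finally:
    # Always return the host to execute mode, even if the install fails.
    run(["change", "user", "/execute"])
```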

Long story short, when building the images for your virtual desktops and application delivery servers, be careful how you approach this. As the common adage goes, "you can’t build a house on a bad foundation", and doing things incorrectly could lead to a bad user experience.

Johnny @mrjohnnyma