Customize the RDS Title “Work Resources” using PowerShell on Windows Server

When using Windows Server to access RemoteApps or desktops through RD Web Access or the new Remote Desktop app, the workspace is titled “Work Resources” by default. This title can be changed using PowerShell cmdlets.

To change the title, open up a new PowerShell window on the connection broker server and import the RemoteDesktop module with the following command.

    Import-Module RemoteDesktop

Next, use the Set-RDWorkspace command to change the workspace name.

    Set-RDWorkspace [-Name] <string> [-ConnectionBroker <string>]  [<CommonParameters>]

For example, you can use the following command to change the workspace name to “Contoso RemoteApps”:

    Set-RDWorkspace -Name "Contoso RemoteApps"

If you are running multiple Connection Brokers in High Availability mode, you must run this against the active broker. You can use this command:

    Set-RDWorkspace -Name "Contoso RemoteApps" -ConnectionBroker (Get-RDConnectionBrokerHighAvailability).ActiveManagementServer

For more information about the Set-RDWorkspace cmdlet, see the Set-RDWorkspace reference.

Connect to Skype for Business Online using PowerShell

Goal / Scope

Provide steps to connect to Skype for Business Online using PowerShell and import the session to manage Skype for Business Online and Microsoft Teams.


There are several ways to connect to Microsoft Online Services, and it can get confusing. This is the method used for managing and maintaining Skype for Business and Microsoft Teams.

Methodology / Process Steps

Download the “Skype for Business Online, PowerShell Module” from Microsoft for the Skype Online Connector.

  • Install the module by simply running the downloaded executable. This provides the “New-CsOnlineSession” command needed below.
  • Set the credentials:

    $UserCredential = Get-Credential

  • Import the required module for the connection (NOTE: this is only possible once the Skype for Business Online, PowerShell Module is installed):

    Import-Module “C:\Program Files\Common Files\Skype for Business Online\Modules\SkypeOnlineConnec

  • Create the online session and assign it to a variable:

    $sfboSession = New-CsOnlineSession -Credential $UserCredential

  • Import the remote session of the connection created above:

    Import-PSSession $sfboSession


Known Issues / Troubleshooting

This section is for issues that have well-defined and tested solutions.

Problem: | PowerShell complains that the specified module was not loaded because no valid module file was found in any module directory.

Solution: | Verify the “Skype for Business Online, PowerShell Module” was properly downloaded and installed.



Monitor Unifi Cloudkey Appliance with Check_MK

Goal / Scope

Improved monitoring of Ubiquiti's Unifi Cloudkey using Check_MK.


I find monitoring soothing, like a warm blanket. Monitoring everything possible just feels good; I want to know what is happening at all times. I recently purchased a Ubiquiti AP and the corresponding Cloudkey device to manage it. I was able to add SNMP information to the configuration of the AP and pull in some simple information, but when it came to the Cloudkey itself, there was nothing. Determined to get more than a simple ping, I found a page that outlines how to enable SNMP on the Unifi Cloudkey. I took the time to walk through the steps of enabling SNMP on the device, only to discover that just a handful of items were returned, none of them very useful. At that point I had an idea: the underlying OS of the Cloudkey is a customized version of Debian, so why couldn't I install the Check_MK agent and report via SSH?

Methodology / Process Steps

Monitoring a Ubiquiti Cloudkey appliance with a Check_MK agent doesn't require much. In fact, it is similar to a standard Linux setup. If not using SSH, install xinetd, then install the agent. Verify the firewall ports are open (6556 by default) and query for services from the Check_MK web console. If using SSH, there are a couple of extra steps, but nothing terrible.

For Standard communication over port 6556

  • Using your favorite scp client, upload the Check_MK agent package to the appliance
  • Install xinetd for configurations not using SSH (I recommend SSH for security)
  • Install the Check_MK agent using the following command
sudo dpkg -i check_mk-agent-vXXX.deb

NOTE: sudo is only necessary if you are not running these commands as root; using a non-root account with sudo is the recommended best practice.

  • Verify firewall port 6556 is open to the monitoring node
  • Run a query against the Cloudkey and watch the magic

For SSH encrypted communication

  • Generate SSH keys using the process here
  • Validate the monitoring user is able to authenticate using the key
  • Using an scp client, upload the Check_MK agent to the appliance
  • Install the Check_MK agent using the following command
dpkg -i check_mk-agent-vXXX.deb
  • In the WATO configuration on the Check_MK web page, under Host and Service Parameters, select “Individual program call instead of agent access”.
  • Set a name for the rule.
  • Under the “Command line to execute” enter something like the following:
ssh -i /omd/sites/[site name]/.ssh/[identity file] -o ConnectTimeout=10 -l root $HOSTADDRESS$ check_mk_agent

This creates an SSH connection using the specified identity file, with a connection timeout of 10 seconds, as the root user at the IP address of the host (obtained from the individual host configuration), and runs the Check_MK agent.

To further improve security (since this is set up using a passphrase-less key pair), the following entry can be added to the SSH key line in the authorized_keys file on the monitored host:

command="/usr/bin/check_mk_agent" ssh-rsa AAAAB3NzaC1y....a83
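For a bit more hardening, the forced command can be combined with the other restriction options sshd supports in the authorized_keys format; a sketch (the key material is a truncated placeholder, as above):

```
command="/usr/bin/check_mk_agent",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-rsa AAAAB3NzaC1y....a83
```

With these options, even if the key leaks, the connection can only run the agent and cannot open tunnels or an interactive shell.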

Known Issues / Troubleshooting

This section is for issues that have well-defined and tested solutions.

Problem: | SSH authentication fails and a prompt is displayed for a password

Solution: | This is almost always due to one of two things: incorrect permissions on the .ssh directory or misconfigured SSH settings.

  • Check the permissions of the .ssh folder and the authorized_keys file itself. They should be set as follows:

On the device being monitored, both the file and the folder should be owned by the user.

The .ssh folder should be set to 700

The authorized_keys file should be set to 600

On the connecting device, verify the same permissions on the .ssh folder; the [identity] file should be set to 600, like the authorized_keys file.

Below is an example of the commands to complete these changes

chmod 700 $HOME/.ssh
chmod go-w $HOME $HOME/.ssh
chmod 600 $HOME/.ssh/authorized_keys
chown "$(whoami)" $HOME/.ssh $HOME/.ssh/authorized_keys

where $HOME is the current user's home directory

  • Check the authorized_keys file on the monitored host and confirm it contains the correct public key
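To confirm the permissions actually took effect, the commands above can be exercised against a throwaway directory first; a sketch (DEMO_HOME stands in for the real $HOME, and `stat -c` is the GNU coreutils form):

```shell
# Create a disposable stand-in for the user's home directory
DEMO_HOME=$(mktemp -d)
mkdir -p "$DEMO_HOME/.ssh"
touch "$DEMO_HOME/.ssh/authorized_keys"

chmod 700 "$DEMO_HOME/.ssh"                  # owner-only access to the folder
chmod 600 "$DEMO_HOME/.ssh/authorized_keys"  # owner-only read/write on the key list
chown "$(whoami)" "$DEMO_HOME/.ssh" "$DEMO_HOME/.ssh/authorized_keys"

# Read the modes back to verify them
dir_mode=$(stat -c '%a' "$DEMO_HOME/.ssh")
file_mode=$(stat -c '%a' "$DEMO_HOME/.ssh/authorized_keys")
echo "$dir_mode $file_mode"    # 700 600
```

Once the dry run prints the expected modes, repeat the same chmod/chown lines against the real $HOME/.ssh.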

Problem: | Running a query returns nothing or an error

Solution: | This is usually caused by a firewall and/or an incorrect installation of the client.

  • Check that the Check_MK node can access the monitored device on the SSH port using telnet.
telnet [host] [SSH port]
  • Manually run the Check_MK agent from the monitored device.
check_mk_agent test

Verify the output of this command.  It should contain the counters and information about the host.
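For reference, a healthy agent reply is plain text made up of `<<<section>>>` blocks; a trimmed sketch (section names and values vary by agent version and OS):

```
<<<check_mk>>>
Version: 1.5.0
AgentOS: linux
<<<df>>>
/dev/root ext4 1928788 1218072 596284 68% /
<<<uptime>>>
93297.91
```

If the command prints nothing at all, the agent itself is misinstalled rather than the transport.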



[warn] _default_ VirtualHost overlap on port 443, the first has precedence

I got this error after adding several SSL sites to my LAMP server:

[warn] _default_ VirtualHost overlap on port 443, the first has precedence

To fix these errors and remove the SSL Certificate warning that was pointing to the wrong site, this statement is required:

NameVirtualHost *:443

in the Apache ports configuration file (ports.conf). Special thanks to Happy Coding for finding and solving this error earlier, though that solution said the file was apache2.conf rather than ports.conf; I believe in older versions of Apache this would have been true.
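For context, the relevant part of ports.conf would then look something like this (Apache 2.2 style; note that the NameVirtualHost directive was removed in Apache 2.4, where name-based virtual hosts are automatic):

```
# /etc/apache2/ports.conf
NameVirtualHost *:443
<IfModule mod_ssl.c>
    Listen 443
</IfModule>
```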

WSUS failing to connect new clients

If the statement is true that you learn from your mistakes and failures, then I am getting really smart today!

I have been successfully using templates in VMware for a short while now. I thought I was getting pretty good at it, since I was standing machines up very quickly and with little effort.

Today, however, I discovered a “glitch” in my process. After applying policy that would have the servers check the local WSUS server for updates, I found that none of the servers were showing up in the administration console. I checked my policy (I have been known to make mistakes before), but everything was just as it should be. I clicked the refresh button with fingers crossed. A server showed up! I figured I was just too impatient, so I moved on to something more productive. A bit later I checked again: still only one server. Hmm. I started thinking about this; all of the servers should have checked in by now, since I set this policy up days ago. Okay, time for some more investigation.

I logged on to a server and in the run command window, I typed the following:

wuauclt /detectnow

That should have been enough to set things straight, or so I thought. However, nothing changed, and after 30 minutes I determined there was something wrong. I also discovered that the server showing up in the WSUS admin console kept changing. I again used my friends on the internet to get some answers. There were a lot of different suggestions and recommendations, but none of them really fit my situation. Then I stumbled across a website with the answer I was looking for. I exported the registry key that I would be removing, picked a server that didn't have a great deal of importance at the time, and deleted the following registry key:

I rebooted the server and entered the following command in the run window:
wuauclt.exe /resetauthorization /detectnow

The server appeared in the WSUS administration console.  Success!  I repeated this process for all the other servers I created from the template and the results were the same.  I had fixed the WSUS issue of my server not showing up in the console.

VMware vSphere fails to connect to hosts

I had a long afternoon today.  I spent a good part of it troubleshooting my vCenter installation and why the hosts were connecting for about 30 seconds and then showing a disconnected / failed state.  I am hoping with this quick little note, I can save many others from the torture that I endured today.

Here is the scenario: it is a brand new installation of a SAN with 4 blade servers. I know that is generic and not very informative, but I am not out to push any products. Two of the blades are going to be used for a small environment, one of the blades is the backup server and vCenter manager, and the remaining blade will be used for voice equipment. I have built a handful of servers (two domain controllers, two file and print servers, one application server, two Exchange servers, and one WSUS server). All of these servers were joined to the newly created domain and seem to be functioning fine. The backup server was not on the domain yet, so, following recommended best practice, I decided I would add it to the domain and control the security and other items of vSphere through Active Directory. This is where things started getting strange. Up to this point everything was working great, so I could only assume the configuration was correct.

The moment I joined the server to the domain and rebooted, the hosts showed disconnected. I also found it strange that I was no longer able to connect using the vSphere client from a remote location, but I was able to connect using vSphere on the server locally. I started investigating and verified DNS was set up correctly on the hosts and the vCenter management server. I tested that I was able to ping and resolve the servers; everything was working as expected, except the host servers were not responding at all and showed disconnected in the vSphere client. The reason I pursued the idea that name resolution was failing somewhere is that these were set up prior to the domain and DNS servers and had been using hosts files for resolution, and the vCenter server name changed slightly when it was joined to the domain.

The most bizarre part of all of this is that I was able to right-click the hosts and choose the connect option, but after about 30 seconds they would disconnect. I was also able to perform tasks on the virtual machines *and* the host during the 30 seconds it was connected.

I found several articles online from people experiencing the same problem I was facing. The suggestions ranged from removing the server from the domain to reinstalling vCenter, a little drastic in my opinion. To be honest, and I hate to admit this, a peer of mine suggested looking at the Windows firewall and verifying that it was off. I blew that advice off at first, which was my big mistake; it would have saved me approximately two hours of frustration had I just taken 30 seconds to check the firewall settings. When the server was built the firewalls were disabled, but as soon as the server was joined to the domain, the domain profile of the firewall was re-enabled. This was blocking traffic that is very important to the functionality of vCenter. Once I disabled this profile, everything magically started working again and the hosts showed “connected” in the vSphere management console. Moral of the story: ALWAYS check the little things.

If you are having intermittent issues with communication between your vCenter manager and host machines, check the Windows Firewall.

SSL Certificate Format Definition and Converting examples using openssl

This is more for my reference, but I thought I would share the information for those that struggled with this as I did.  The first section defines the different types of formats that can be used for certificates, and when / how / and by who they are used.  The second section provides examples of how to convert between the different formats using openssl.

Certificate Format Definitions

PEM Format

The PEM format is the most common format that certificate authorities issue certificates in. PEM certificates usually have extensions such as .pem, .crt, .cer, and .key. They are Base64-encoded ASCII files and contain “-----BEGIN CERTIFICATE-----” and “-----END CERTIFICATE-----” statements. Server certificates, intermediate certificates, and private keys can all be put into the PEM format.

Apache and other similar servers use PEM format certificates. Several PEM certificates, and even the private key, can be included in one file, one below the other, but most platforms, such as Apache, expect the certificates and private key to be in separate files.

DER Format

The DER format is simply a binary form of a certificate instead of the ASCII PEM format. It sometimes has a file extension of .der, but it often has a file extension of .cer, so the only way to tell the difference between a DER .cer file and a PEM .cer file is to open it in a text editor and look for the BEGIN/END statements. All types of certificates and private keys can be encoded in DER format. DER is typically used with Java platforms. Online SSL converter tools typically only convert certificates to DER format; to convert a private key to DER, use the OpenSSL commands below.

PKCS#7/P7B Format

The PKCS#7 or P7B format is usually stored in Base64 ASCII format and has a file extension of .p7b or .p7c. P7B certificates contain “-----BEGIN PKCS7-----” and “-----END PKCS7-----” statements. A P7B file only contains certificates and chain certificates, not the private key. Several platforms support P7B files, including Microsoft Windows and Java Tomcat.

PKCS#12/PFX Format

The PKCS#12 or PFX format is a binary format for storing the server certificate, any intermediate certificates, and the private key in one encryptable file. PFX files usually have extensions such as .pfx and .p12. PFX files are typically used on Windows machines to import and export certificates and private keys.

When converting a PFX file to PEM format, OpenSSL will put all the certificates and the private key into a single file. You will need to open the file in a text editor and copy each certificate and private key (including the BEGIN/END statements) to its own individual text file and save them as certificate.cer, CACert.cer, and privateKey.key respectively.
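The copy-and-paste step can also be scripted. Here is a rough sketch that splits a bundle on each BEGIN line; the bundle contents and the part-N.pem names are made-up placeholders:

```shell
# Build a fake combined PEM bundle like the one OpenSSL writes from a PFX
work=$(mktemp -d)
cat > "$work/bundle.pem" <<'EOF'
-----BEGIN CERTIFICATE-----
AAAA
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
BBBB
-----END CERTIFICATE-----
-----BEGIN PRIVATE KEY-----
CCCC
-----END PRIVATE KEY-----
EOF

# Each BEGIN line starts a new output file: part-1.pem, part-2.pem, part-3.pem
awk -v dir="$work" '/BEGIN/{n++} {print > (dir "/part-" n ".pem")}' "$work/bundle.pem"
parts=$(ls "$work"/part-*.pem | wc -l)
```

The resulting files can then be renamed to certificate.cer, CACert.cer, and privateKey.key as described above.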

OpenSSL Command syntax examples

OpenSSL Convert PEM

Convert PEM to DER

openssl x509 -outform der -in certificate.pem -out certificate.der

Convert PEM to P7B

openssl crl2pkcs7 -nocrl -certfile certificate.cer -out certificate.p7b -certfile CACert.cer

Convert PEM to PFX

openssl pkcs12 -export -out certificate.pfx -inkey privateKey.key -in certificate.crt -certfile CACert.crt

OpenSSL Convert DER

Convert DER to PEM

openssl x509 -inform der -in certificate.cer -out certificate.pem

Openssl Convert CRT

Convert CRT to PEM

openssl x509 -in mycert.crt -out mycert.pem -outform PEM

OpenSSL Convert P7B

Convert P7B to PEM

openssl pkcs7 -print_certs -in certificate.p7b -out certificate.cer

Convert P7B to PFX

openssl pkcs7 -print_certs -in certificate.p7b -out certificate.cer

openssl pkcs12 -export -in certificate.cer -inkey privateKey.key -out certificate.pfx -certfile CACert.cer

OpenSSL Convert PFX

Convert PFX to PEM

openssl pkcs12 -in certificate.pfx -out certificate.cer -nodes
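A quick way to convince yourself a conversion was lossless is to round-trip a throwaway certificate and compare fingerprints; a sketch (the self-signed certificate and its subject name are made up for the test):

```shell
# Create a disposable self-signed certificate to convert
work=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout "$work/key.pem" -out "$work/cert.pem" -subj "/CN=roundtrip.test"

# PEM -> DER -> PEM, using the same commands as above
openssl x509 -outform der -in "$work/cert.pem" -out "$work/cert.der"
openssl x509 -inform der -in "$work/cert.der" -out "$work/cert2.pem"

# The fingerprints of the original and round-tripped certificates should match
fp1=$(openssl x509 -in "$work/cert.pem"  -noout -fingerprint)
fp2=$(openssl x509 -in "$work/cert2.pem" -noout -fingerprint)
echo "$fp1"
echo "$fp2"
```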

This information was taken almost directly from SSLShopper. Thank you, SSLShopper, for providing this information.

Create and Apply a certificate for Apache web server

I have been asked several times lately how to generate and deploy certificates for web servers.  I am going to explain here how to set a certificate for an Apache web server.

Most users of Linux / LAMP servers and similar will find themselves needing to generate a certificate request to submit to a certificate authority. Certificates are complex, come in various formats, and some users will want to submit a CSR generated on a Unix platform to a Microsoft certificate authority and then use the resulting certificate on the Unix appliance or server. This example shows how quickly and easily you can generate a certificate request file.

Start by issuing the following command on the console of the LAMP server:

openssl req -new -newkey rsa:2048 -nodes -keyout server.key -out server.csr

A couple of things to note.
First, the rsa:2048 value: this can really be whatever you would like, but at the time of writing it was the standard key size. Second, for those that would like to use DSA, that will require setting a pass-phrase, which will consequently have to be input every time the server is started.

Another thing to note is the names of the files. The generic name [server] was used here, but I have found that it helps keep certificates straight if you just name them after the URL you are generating them for. For example, if the website you are creating a certificate for is www.example.com, then the file names would be www.example.com.key and www.example.com.csr respectively.

Finally, the -nodes option removes the pass-phrase, leaving the private key unencrypted on disk.

This command will generate two files: a .key file (the private key) and a .csr file (the certificate request). The private key should be kept very secure. If this file is compromised, the server's identity can no longer be verified as accurate, and using the private key others will be able to decrypt the data between your server and the clients.

Using the .csr file, you can either open it and copy and paste the contents into the certificate authority's request form, or you may be able to simply upload the file to the certificate authority. Once you have generated the certificate and you have it in the right format (normally PEM), you can use it and the private key to finish setting up your website to use SSL.
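For unattended use, the same request can be generated without the interactive prompts by supplying -subj; a sketch (the subject values and the www.example.com name are placeholders for your own details):

```shell
# Generate the key and CSR non-interactively
work=$(mktemp -d)
openssl req -new -newkey rsa:2048 -nodes \
  -keyout "$work/www.example.com.key" -out "$work/www.example.com.csr" \
  -subj "/C=US/ST=State/L=City/O=Example/CN=www.example.com"

# Inspect the subject of the request before submitting it to the CA
subject=$(openssl req -in "$work/www.example.com.csr" -noout -subject)
echo "$subject"
```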

Setting up SSH public / private key authentication in Linux

This is a very generic and basic configuration to set up SSH key authentication for Linux. I will not go into great detail, nor do I claim that this configuration is the most secure or practical. I am simply identifying the requirements to configure SSH authentication using public and private key pairs.

To start, make sure that OpenSSH is installed on your OS. For Debian-based operating systems, this can be as simple as issuing the following commands (where openssh-server installs the server and openssh-client installs the client), or review the website for details on installing it:

sudo apt-get update
sudo apt-get install openssh-server openssh-client

Once the server is installed, you will want to configure it to use public key authentication, as by default it is disabled. This is done by editing two files: /etc/ssh/sshd_config to modify the server settings and /etc/ssh/ssh_config to modify the client settings. If you plan to use this installation both to connect to other servers and to be connected to, you will need to modify both files.

First, the /etc/ssh/sshd_config file allows changes to be made to the SSH server. The following lines will need to be added or uncommented, depending on the original configuration file provided. I would highly recommend creating a backup copy of the file before making any changes, in case something goes horribly wrong.

PubkeyAuthentication yes
AuthorizedKeysFile     %h/.ssh/authorized_keys

While there are other options that I would also configure at this point, they do not relate to the configuration of public key authentication and so will not be discussed here. These lines “enable” authentication using a public / private key exchange, and they direct the server where to look for keys that it will accept. The “%h” is a variable for the home location, and the default filename is authorized_keys, but really any name can be used here. Restart the SSH server by issuing the following command:

sudo /etc/init.d/ssh restart

Now we need to set up the client. Open the /etc/ssh/ssh_config file and add or uncomment the following lines (again, I recommend making a backup of this file as well). This can be as simple as:

cp ssh_config ssh_config.orig

Once you have your backup, the only line that needs to be changed is:

IdentityFile ~/.ssh/identity

where this provides the system the location of the file used to identify yourself to the remote server. In the next step we will create this file, so for now it can be named anything; identity is simply a default. The “~/” refers to the current user's home folder (or your folder in this case).

Since the identity file doesn't exist, we need to create it. To make things easy, I run “cd ~” to make sure I am in my home directory. It really isn't necessary, but when you want to verify the creation of the .ssh folder and the identity file, it is nice to already be there. Issuing the following command will generate the public and private key pair that we will use to identify ourselves to other servers.

ssh-keygen -t dsa

This will walk you through a short process and ask you to name your keys. The default location is fine, but you may want to give your keys a special name. Just note that whatever name you set here will also need to be set in the ssh_config file. So if you use the name “~/.ssh/mykey”, for example, this location and name will need to be provided in the ssh_config file so that the SSH client knows where to find the key you would like to use.

Once the key generation is complete (if you didn't have one before and you used the default location), you will have a .ssh folder in your home directory. It is a dot file and therefore hidden by default, but by typing

ls -a

you will be shown the hidden files and folders in your directory. If you change directory into that folder you will see [your key name] and [your key name].pub. The [your key name] file is the private key; you will want to guard this with your life. If someone gains access to this key, they will be able to access any location / server where you have uploaded your public key. Now, the last step of SSH key authentication is to put the public key in the “authorized_keys” file, the file on the server you want to access that verifies the keys for access.

The best way I have found to do this is to connect to the server via ssh and password one last time to upload the public key using the following command.

cat ~/.ssh/id_dsa.pub | ssh user@machine "mkdir ~/.ssh; cat >> ~/.ssh/authorized_keys"

If the .ssh directory already exists on the server, the alternative that just places the key in the file would be

cat ~/.ssh/id_dsa.pub | ssh user@machine "cat >> ~/.ssh/authorized_keys"

where cat reads the file “id_dsa.pub” (or whatever your key file name happens to be), “pipes” it to the ssh session, and finally cat on the remote side appends the new key to the authorized_keys file.
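What the remote half of that pipeline does can be sketched locally without a server; the directory, key material, and comment below are all placeholders:

```shell
# Stand-in for the remote user's home directory
remote_home=$(mktemp -d)

# Placeholder public key line (real ones are much longer)
pubkey="ssh-dss AAAAB3Nza...demo user@example"

# Same steps the remote shell performs: create .ssh, append the key line
mkdir -p "$remote_home/.ssh"
printf '%s\n' "$pubkey" >> "$remote_home/.ssh/authorized_keys"

# Confirm exactly one key line landed in the file
keys=$(wc -l < "$remote_home/.ssh/authorized_keys")
echo "$keys"    # 1
```

Because >> appends, running the upload twice would add a duplicate line rather than overwrite anything, which is why the one-time upload over password authentication is safe.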

Once these steps are complete and no errors were encountered, you should be able to enter ssh user@machine and authenticate via key. If you provided a pass-phrase for your key, you will be asked to enter it; otherwise you will be logged on and ready to go. On Windows platforms using pass-phrases can be frustrating, as it will ask for the pass-phrase every time a connection is attempted. This can be helped by using something like Pageant for PuTTY, which will “capture” the pass-phrase for your key and use it when logging on. You will need to supply the pass-phrase once every time you log on to the Windows computer, but it beats having to repeatedly type it when connecting to a server.

Citrix Receiver, PnaAuthDialog_popup window solution

The Citrix Receiver has always been a tremendous thorn in my side to install on Linux, and the fact that I have been using 64-bit lately only amplifies this. The latest issue came with the receiver actually starting, but along with it a blank window entitled “PnaAuthDialog_popup” is displayed. It sits over the top of the receiver window and can't be closed; the window is apparently set to always-on-top, and nothing happens when it is sitting there. The window is movable, but that doesn't help too much. I found a help page on the internet that suggested adding an extra line in /etc/sysctl.conf with:
…and rebooting.
I can't locate the page anymore to give credit, but this seemed to correct the issue for me.