PSConfEU

Long absence

I know I haven’t posted in a while. Mostly because I was preparing my talks for PSConfEU and other events lately.

Besides our regular PPoSh Meetups (which you can find here), I was invited to give a talk for SysOps DevOps Polska about checklists and how they can be sexy. It was a great experience. I really recommend their meetups. If you’re interested, a recording can be found here. Pawel Jarosz was there too with a tough subject – Windows 2016 and Hyper-converged cluster – more information on his blog.

A few words about PSConfEU

I attended the PSConfEU conference in both 2016 and 2017 and boy, this is the greatest conference one can imagine. It’s not only about PowerShell – it spans many areas of interest. For me, this is the most welcoming community of heartful people I’ve ever met. No matter their skill level, they’re willing to sit down with you for a while and help you with a problem. You can approach them during the event in the Zoo and have a chat about literally anything. There are so many great people there that if I wanted to mention them here, I would have to rewrite the list of all the speakers and attendees.

In 2016 I was there with 4 of my co-workers. When we met Jeffrey Snover something clicked.

After the first conference, at the end of 2016 I sent in my two talks, hoping that I would have the privilege of going there as a speaker. Well, I wasn’t among the speakers. I went as a delegate and it was even better than in 2016.

(here’s our trio with the amazing Aleksandar Nikolić)

Together with Tomasz Dabrowski and Pawel Jarosz, helped by the amazing Kasia Pieter, we started the Polish PowerShell User Group. We had our first meetups just before PSConfEU 2017. When we captured Jeffrey Snover for a selfie, we hoped that maybe someday he would come visit our small group in Poland.

Well, I tried sending my talks again in 2017. I’m a stubborn person. To my surprise, one Friday afternoon I received an email from Tobias saying I was invited as a speaker. Both my talks – OVF and Release Pipeline – were accepted.

It seems this was a magical point – shortly after, I was invited to the SysOps DevOps meetup mentioned earlier. We were invited to GeekWeekWro as the PPoSh Group and decided to do a Release Pipeline talk together with Tomasz Dabrowski. That was a very good meetup. Lots of great conversations followed.

We’re prepping for PSDay.PL – stay tuned! In the meantime, Jeffrey Snover himself asked us if we have any local user group he could visit. Well, after a few tweets, I was very happy to hear that Jeffrey will come to Wroclaw shortly after PSConfEU 2018. More details here. As a follow-up, we’ll also be in Warsaw with PEPUG – right before Microsoft Tech Summit.

Back to PSConfEU

I am totally freaked out and nervous, yet determined to deliver two talks:

OVF – getting fun from boring tasks – remote testing, monitoring and reporting on the state of your infrastructure can be automated!

And

Release Pipeline – the PPoSh Modules Story – How an average Ops can benefit from mysterious Release Pipeline.

Tomek agreed to do the second talk again with me.

Fingers crossed

Keep your fingers crossed that I won’t screw it up! 🙂

See you at the Best Conference in the World!


Get VM process id

The task

Imagine a standard operation – you need to expand a VHDX. It isn’t hard, right? Just:

  • go to Hyper-V manager,
  • right click the VM,
  • select settings,
  • select vhdx you want to expand,
  • click edit
  • then next
  • then select expand, click next
  • select new size,
  • then next
  • then finish.

Or you can use PowerShell:

  1. Get VM hard disk location
  2. Resize VHD file
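
The two steps above might be sketched like this (the VM name and target size are examples; `Resize-VHD` needs the Hyper-V module and must run where the VHDX is reachable):

```powershell
# Find the VHDX path attached to the VM ('MyVM' is illustrative)
$vhdPath = (Get-VMHardDiskDrive -VMName 'MyVM').Path

# Expand the virtual disk to the new size (60 GB here is just an example)
Resize-VHD -Path $vhdPath -SizeBytes 60GB
```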

Trouble round the corner

But then the nasty gnome comes. The task doesn’t complete within 30 minutes. This is a dynamically expanding disk; it shouldn’t take longer than a few seconds. You try to stop the VM from within the guest OS. No go. You try to turn it off from the Hyper-V host level. No go again. The VM stays in ‘Stopped-Critical’ state. You want to try and kill the vmwp.exe process that is responsible for this VM. To get it, you first need the VM GUID. The GUI way is to go to the VM folder and check the XML file name:

Now, using ProcessExplorer we can add the UserName column (View -> Select Columns) and check for the given GUID:

Or you can use PowerShell:
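
A sketch of the idea: each vmwp.exe worker process runs under a virtual account named after the VM GUID, so the process can be matched by its owner ('MyVM' is illustrative):

```powershell
# Get the VM's GUID straight from Hyper-V
$vmId = (Get-VM -Name 'MyVM').Id

# Find the vmwp.exe process whose owner account matches that GUID
Get-CimInstance Win32_Process -Filter "Name = 'vmwp.exe'" |
    Where-Object { (Invoke-CimMethod -InputObject $_ -MethodName GetOwner).User -eq "$vmId" } |
    Select-Object ProcessId, Name
```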

Not so happy ending

Now, killing the vmwp process USUALLY works. It didn’t work this time and caused the vmms (Virtual Machine Management Service) to get stuck. Which in the end caused the whole node to go crazy. But that’s another story.

Format Drive. Remotely

Format drive – the Lazy way

Another day, another quicky. A few new VMs running with additional drives that someone forgot to initialize? Considering RDPing to each of them? Or maybe the MMC console (diskmgmt.msc), initializing them one by one? What about Honolulu?

No worries, PowerShell will do just fine. I love those quickies that can save you a few clicks here and there!

The RAW meat

So, basically, formatting a drive requires these four steps:

  1. have a disk with a RAW (uninitialized) partition
  2. initialize the drive
  3. create a new partition, and assign a letter to it
  4. finally – format the drive

So:
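
Those four steps collapse into one pipeline (run elevated; the NTFS file system and ‘Data’ label are examples – and double-check `Get-Disk` output first so you don’t wipe the wrong disk):

```powershell
# RAW disks only -> initialize -> create max-size partition with a letter -> format
Get-Disk |
    Where-Object PartitionStyle -eq 'RAW' |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -AssignDriveLetter -UseMaximumSize |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel 'Data' -Confirm:$false
```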

Let’s make it usable

Now, if you’re like me and would like to be able to connect to remote machines with different credentials (LAPS!) and format drives with different labels and file systems – there’s a function for you here:
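
A minimal sketch of such a function – the full version lives in the PPoShTools module; the name and parameters here are illustrative:

```powershell
function Format-RemoteDrive {
    # Illustrative sketch only - see PPoShTools for the real implementation
    [CmdletBinding()]
    param (
        [Parameter(Mandatory)][string]$ComputerName,
        [pscredential]$Credential,
        [string]$FileSystem = 'NTFS',
        [string]$Label = 'Data'
    )
    $invokeParams = @{ ComputerName = $ComputerName }
    if ($Credential) { $invokeParams.Credential = $Credential }

    Invoke-Command @invokeParams -ScriptBlock {
        Get-Disk |
            Where-Object PartitionStyle -eq 'RAW' |
            Initialize-Disk -PartitionStyle GPT -PassThru |
            New-Partition -AssignDriveLetter -UseMaximumSize |
            Format-Volume -FileSystem $using:FileSystem -NewFileSystemLabel $using:Label -Confirm:$false
    }
}
```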

Or you can grab it as part of the PPoSh (Polish PowerShell User Group) module from GitHub or the PowerShell Gallery. There are more goodies in there.

If you’ve installed it before (Install-Module PPoShTools), just update it (Update-Module PPoShTools).

Hyper-V 2016 S2D Get-StorageJob repair status

The need

So here’s the deal. We’re performing some regular maintenance on our Hyper-V 2016 S2D cluster – i.e. patching. This includes rebooting nodes. While a node is offline, the cluster performs storage repair jobs to keep our 3-way mirror healthy. It’s not good to reboot another node while a repair job is in progress. To check the state of the CSVs I can either use the GUI:

or PowerShell:
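
Something like this (run on a cluster node):

```powershell
# HealthStatus/OperationalStatus show which virtual disk is Degraded or InService (repairing)
Get-VirtualDisk | Select-Object FriendlyName, HealthStatus, OperationalStatus
```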

With this I will see which drive is in a degraded state or repairing.

I can use another cmdlet to get the status of the job:
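
That cmdlet is Get-StorageJob:

```powershell
# Running repair jobs with elapsed time and amount of data processed
Get-StorageJob |
    Select-Object Name, JobState, PercentComplete, BytesProcessed, BytesTotal, ElapsedTime
```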

This, on the other hand, shows me how the repair is going, how long tasks are running and how much data has already been processed. What I don’t get from here is which job relates to which drive. This can be useful. Imagine you’ve got one repair job that is stuck or taking a long time. I’d like to know which CSV (Virtual Disk) is affected.

The Search

Both objects, returned by either Get-StorageJob or Get-VirtualDisk, expose a property called ObjectId, which looks like this:

Seems like the thing I’m looking for. Now I just need to parse the string to get the last GUID-like string between { and }, and match it against the same position in Get-VirtualDisk’s output. Let’s use some regex. As I’m new to this area, I’ve used this site to get my regex right. Just paste your string and try different matches till you get it right. Seems like this will do the trick:

([A-Za-z0-9]{8}\-?){1}([A-Za-z0-9]{4}\-?){3}([A-Za-z0-9]{12})
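
To sanity-check the pattern, here is a quick run against a made-up ObjectId-like string (the real value comes from Get-StorageJob; this sample just contains two GUID-shaped tokens):

```powershell
# Made-up string shaped like an ObjectId: two GUID-like tokens in braces
$objectId = 'OBJ:{c0a1b2c3-d4e5-46a7-b8c9-d0e1f2a3b4c5}:VD:{f72f9d8a-9b3c-4cde-9abc-0123456789ab}'
$pattern  = '([A-Za-z0-9]{8}\-?){1}([A-Za-z0-9]{4}\-?){3}([A-Za-z0-9]{12})'

# Take the last GUID-like match, as described above
$lastGuid = ([regex]::Matches($objectId, $pattern) | Select-Object -Last 1).Value
$lastGuid   # f72f9d8a-9b3c-4cde-9abc-0123456789ab
```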

Got it – Let’s try it:

And nothing. No output. Verifying both objects, it seems they differ by one character: the StorageJob GUID is +1 at the 18th position compared to the VirtualDisk one.

Ok, let’s adjust my regex to match the new condition:

([A-Za-z0-9]{8}\-?){1}([A-Za-z0-9]{4}){1}

The resolution

Now I know I can correlate a repair job to a specific CSV. Let’s get some additional data from both commands. I’d like to know which drive is being repaired, the status, percent complete and the amount of data.

It’s now just a matter of creating a custom object in a foreach loop:
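
A sketch of that loop (run on a cluster node; property names come from Get-StorageJob / Get-VirtualDisk, the GUID handling follows the parsing described above, and the exact matching logic is my reconstruction):

```powershell
$pattern = '([A-Za-z0-9]{8}\-?){1}([A-Za-z0-9]{4}\-?){3}([A-Za-z0-9]{12})'
$disks   = Get-VirtualDisk

foreach ($job in Get-StorageJob) {
    # Last GUID-like token in the job's ObjectId
    $jobGuid = ([regex]::Matches($job.ObjectId, $pattern) | Select-Object -Last 1).Value

    # The two GUIDs differ at the 18th character, so match on the leading part only
    $disk = $disks | Where-Object { $_.ObjectId -match $jobGuid.Substring(0, 17) }

    [pscustomobject]@{
        VirtualDisk     = $disk.FriendlyName
        JobName         = $job.Name
        JobState        = $job.JobState
        PercentComplete = $job.PercentComplete
        GBProcessed     = [math]::Round($job.BytesProcessed / 1GB, 2)
        GBTotal         = [math]::Round($job.BytesTotal / 1GB, 2)
    }
}
```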

Running it locally on a cluster node, though, is not the way I like it. Let’s use Invoke-Command and target the cluster owner node for information. Also, let’s add a Credential parameter, so I can query the cluster from my own workstation without admin privileges. I’ll end up with a function like this:
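
Roughly like this – the function and parameter names here are my own, and the body is the correlation loop from above wrapped in Invoke-Command:

```powershell
function Get-StorageJobStatus {
    # Illustrative sketch
    [CmdletBinding()]
    param (
        [Parameter(Mandatory)][string]$ClusterName,
        [pscredential]$Credential
    )
    $invokeParams = @{ ComputerName = $ClusterName }
    if ($Credential) { $invokeParams.Credential = $Credential }

    Invoke-Command @invokeParams -ScriptBlock {
        $pattern = '([A-Za-z0-9]{8}\-?){1}([A-Za-z0-9]{4}\-?){3}([A-Za-z0-9]{12})'
        $disks   = Get-VirtualDisk
        foreach ($job in Get-StorageJob) {
            $jobGuid = ([regex]::Matches($job.ObjectId, $pattern) | Select-Object -Last 1).Value
            $disk = $disks | Where-Object { $_.ObjectId -match $jobGuid.Substring(0, 17) }
            [pscustomobject]@{
                VirtualDisk     = $disk.FriendlyName
                JobName         = $job.Name
                JobState        = $job.JobState
                PercentComplete = $job.PercentComplete
            }
        }
    }
}
```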

 

LAPS – Create credential object

Why

In all environments that I manage, I have deployed LAPS. I’ve already covered what LAPS is and how to deploy it with ease here.

Now, when I need to connect to remote machines, I don’t need to grant my regular or admin account local administrator privileges. I can just use LAPS. Why? If my account has no direct access or privileges on other machines, it can’t be easily exploited (think malware, ransomware). This does not protect you in all cases (a determined, skilled adversary) but it surely adds another layer of protection to your environment.

How

The idea of using it in daily tasks is simple: assign my admin account permissions to query AD for a computer’s password, use that account to retrieve the password for a specific machine, then create a credential object and use it to connect to the remote machine. A fairly simple, repeatable task – a great opportunity to create a function for it.

The working code looks something like this:
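
A sketch of it (the computer name is illustrative; ms-Mcs-AdmPwd is the AD attribute where classic LAPS stores the password):

```powershell
# Read the local admin password that LAPS stored in AD
$computer = Get-ADComputer -Identity 'SRV01' -Properties 'ms-Mcs-AdmPwd'
$password = ConvertTo-SecureString -String $computer.'ms-Mcs-AdmPwd' -AsPlainText -Force

# It's a local account, so qualify the user name with the computer name
$credential = [pscredential]::new('SRV01\Administrator', $password)
```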

Let’s put it into a function for better use:
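
A minimal version could look like this (the function name and parameters are my own):

```powershell
function Get-LapsCredential {
    # Illustrative sketch
    [CmdletBinding()]
    param (
        [Parameter(Mandatory)][string]$ComputerName,
        [string]$UserName = 'Administrator'
    )
    $computer = Get-ADComputer -Identity $ComputerName -Properties 'ms-Mcs-AdmPwd'
    if (-not $computer.'ms-Mcs-AdmPwd') {
        throw "No LAPS password readable for $ComputerName"
    }
    $password = ConvertTo-SecureString -String $computer.'ms-Mcs-AdmPwd' -AsPlainText -Force
    [pscredential]::new("$ComputerName\$UserName", $password)
}
```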

Now it’s a matter of:
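
With such a function in place (I’m calling it Get-LapsCredential here; the real name may differ), connecting boils down to something like:

```powershell
# Connect with the LAPS credential instead of your own account
Invoke-Command -ComputerName 'SRV01' `
    -Credential (Get-LapsCredential -ComputerName 'SRV01') `
    -ScriptBlock { hostname }
```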

Clean and easy!

P.S. If you’d like to get all computers that already have passwords (and you have permissions to read them), then this might help:
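
A sketch using an LDAP filter on the LAPS attribute:

```powershell
# Computers whose ms-Mcs-AdmPwd attribute is set (and readable by you)
Get-ADComputer -LDAPFilter '(ms-Mcs-AdmPwd=*)' -Properties 'ms-Mcs-AdmPwd' |
    Select-Object Name, @{ Name = 'AdmPwd'; Expression = { $_.'ms-Mcs-AdmPwd' } }
```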

Proper configuration for Virtualized DC VMs

Where’s my time

Simple things are sometimes the trickiest. Once in a while there’s a recurring question: how should you set up time in your domain if all DCs are virtualized? The undying answer: “have one physical box that acts as the primary DC”. My “virtualize everything” nature opposes this. You can have all DCs virtualized in your environment – you just have to do it right.

How it works

I highly recommend these links if you’re interested in this subject:

Just a quick recap. When an OS boots up, it queries a ‘source’ for the current time. In the case of a physical box, the ‘source’ will be the system clock. A virtual machine, though, will ask the hypervisor for the current time. Then, after the VM is completely up, in an Active Directory environment it will use the domain hierarchy (unless configured differently) to synchronize its clock at regular intervals.

Root cause

What is the issue then? Imagine all your DCs are down or under heavy load, or your Hyper-V host is under heavy load – it may cause time to shift a little. Then a VM with the DC role starts and synchronizes time with the Hyper-V host, changing its time to an inaccurate one. Suddenly, all machines in your domain have the wrong time and bad things happen: Kerberos tickets are out of sync making logins fail, internet services complain about your time, etc.

To resolve this, one can disable the Time Synchronization Hyper-V integration component:
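
On the host, that could be done like this (‘MyDC’ is an example VM name – shown for completeness, not as the recommended fix):

```powershell
# Turn off the Time Synchronization integration service for one VM
Disable-VMIntegrationService -VMName 'MyDC' -Name 'Time Synchronization'
```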

but that’s not the best idea. Why? Because a VM does not have a battery to sustain the clock while it is powered off. When it starts or resumes, its time is not correct, so it is desirable for a VM to get its time from the Hyper-V host. Some people configure Hyper-V hosts as the authoritative time source for the whole domain, which violates best practices in an Active Directory domain environment.

Resolution

How should it be done then?

  1. All Domain Controllers should be allowed to use Hyper-V integration components during startup – and only during startup!
  2. The Domain Controller holding the PDC Emulator FSMO role should synchronize time with an external source,
  3. All other Domain Controllers should synchronize from the PDC,
  4. All machines should synchronize from any Domain Controller.

I’ve got no time, show me some code

  1. First, let’s make sure our DCs have Time synchronization enabled:
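
Something like this, run on the Hyper-V host:

```powershell
# Check the Time Synchronization integration service on each VM
Get-VM | Get-VMIntegrationService -Name 'Time Synchronization' |
    Select-Object VMName, Name, Enabled
```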

If not, we can easily fix that:
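
For example:

```powershell
# Enable Time Synchronization wherever it is turned off
Get-VM |
    Get-VMIntegrationService -Name 'Time Synchronization' |
    Where-Object Enabled -eq $false |
    Enable-VMIntegrationService
```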

  2. Then add a registry entry on all DCs that will stop the VM (once booted) from using the VM Integration Component Time Provider:
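
That is the VMICTimeProvider key – with Enabled = 0 the guest still gets its clock from the host at startup, but w32time no longer uses the provider afterwards. A sketch that applies it to all DCs remotely:

```powershell
# Disable the VMIC time provider inside every DC guest
$dcs = (Get-ADDomainController -Filter *).HostName
Invoke-Command -ComputerName $dcs -ScriptBlock {
    Set-ItemProperty `
        -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\W32Time\TimeProviders\VMICTimeProvider' `
        -Name 'Enabled' -Value 0
}
```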

  3. Configure the PDC Emulator to use an external source:
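
A sketch, run on the PDC Emulator (the peer list is an example – pick NTP servers appropriate for you):

```powershell
# Point the PDC at external NTP servers and mark it as a reliable time source
w32tm /config /manualpeerlist:"0.pl.pool.ntp.org 1.pl.pool.ntp.org" /syncfromflags:manual /reliable:yes /update
Restart-Service w32time
w32tm /resync
```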

  4. Configure all other DCs to use the domain hierarchy:
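
Run on every remaining DC:

```powershell
# Go back to synchronizing from the domain hierarchy
w32tm /config /syncfromflags:domhier /update
Restart-Service w32time
w32tm /resync
```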

Once done, you’ll get confirmation that your PDC Emulator is synchronizing with an external source:
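
A quick way to check:

```powershell
# On the PDC this should report your external NTP peer(s)
w32tm /query /source
```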

And your other DCs will synchronize with your PDC

And we’re back on right time track!

Bonus

P.S. Did anyone notice this little error message?

“VM Integration Services status reports protocol version mismatch on pre-Windows 10 Version 1607 or Windows Server 2016 VM guests” (link)

It just means that my VM is not a Windows Server 2016 guest running on a Windows Server 2016 Hyper-V host.

Hyper-V Remove Lingering DVD iso

Mass Dismount

Another day – another dirty quicky.

So you’ve got a bunch of hosts and some VMs on them. Some of those have ISO files attached. Some of them shouldn’t. Especially if the ISO is not accessible to all nodes in the cluster.

You can get an error like this:

Now, going VM after VM can be a little overwhelming, right?

We could do a clean sweep and remove all DVDs from ALL VMs, but that’s a little too… trigger-happy.

PowerShell Rocks!

So here’s a one-liner that will query your cluster nodes and display the necessary information in Out-GridView. This allows us to select only specific VMs and click OK to dismount the ISO from their DVD drives. Because Set-VMDvdDrive does not accept pipeline input, we’re doing a ForEach-Object loop. If you click Cancel, it won’t dismount a thing.
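
A sketch of it (the cluster name is illustrative; -PassThru makes Out-GridView return only the rows you select, and Cancel returns nothing):

```powershell
# List every DVD drive with media across the cluster, pick the ones to clear
Get-ClusterNode -Cluster 'MyCluster' |
    ForEach-Object { Get-VM -ComputerName $_.Name } |
    Get-VMDvdDrive |
    Where-Object Path |
    Out-GridView -Title 'Select VMs to dismount ISO from' -PassThru |
    ForEach-Object {
        # -Path $null ejects the ISO from that exact drive
        Set-VMDvdDrive -VMName $_.VMName -ComputerName $_.ComputerName `
            -ControllerNumber $_.ControllerNumber -ControllerLocation $_.ControllerLocation `
            -Path $null
    }
```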

Job Done!