Colin’s IT, Security and Working Life blog

February 18, 2010

VB Script to delete files older than a few days

Filed under: Uncategorized — chaplic @ 5:35 pm

I had a client that, for a variety of reasons, had moved the Outlook temporary files to a different folder from the default. It was noticed that Outlook wouldn’t always delete its temporary files, so someone had hacked together a little VB script to do the job.

However, it appeared to be buggy and didn’t delete all files, with no obvious reason why. Also, if the temp area wasn’t available when it was run, it would error to the screen.

To help, I concocted this script. I’m no VBScript guru, but by looking at examples on the web I pulled this together. It majors on letting the user know what it’s doing and on trapping errors. The programming style isn’t perfect, but it works.

Most importantly of all, we found the bug in the previous effort: some Outlook attachments were marked as read-only and therefore never deleted. This script gets round that by forcing deletion, which then only fails on genuine permissions problems.

Here it is in all its glory..

Option Explicit

' Path to delete
const FolderName = "d:\Temp"
' Age of file (last modified) in days before it is deleted
const Days = 7

' Set debugmode to 1 for screen pops with helpful messages
const debugmode = 0

' Severity levels for WshShell.LogEvent
const SUCCESS = 0
const ERROR = 1
const WARNING = 2
const INFORMATION = 4

dim result
logit "Beginning process to delete temp files in " & FolderName & " older than " & Days & " days", INFORMATION
result = delFiles(FolderName, Days)

Function delFiles(strDir, strDays)
' Take in the path to delete and the number of days old a file has to be,
' then delete the files in that path if they are older than that

  dim fso, folder, individualFile, folderContains
  dim strComments
  dim intDeleted, intNotDeleted
  dim intDifferenceInDays

  intDeleted = 0
  intNotDeleted = 0

  Set fso = CreateObject("Scripting.FileSystemObject")
  on error resume next

  Set folder = fso.GetFolder(strDir)
  if err = 0 then
    strComments = strComments & strDir & " exists." & VBCRLF
  else
    strComments = strComments & "**ERROR** Accessing folder - cannot find " & strDir & VBCRLF & err.description & " (" & err.source & ")" & VBCRLF
    logit strComments, ERROR
    Exit Function
  end if

  Set folderContains = folder.Files
  strComments = strComments & "Deleting files older than " & strDays & " days" & VBCRLF

  For Each individualFile in folderContains
    ' Loop through each file in the folder and check its date
    if debugmode = 1 then
      wscript.echo "Looking at " & individualFile & VBCRLF & _
                   "Which has date last modified of: " & individualFile.datelastmodified & VBCRLF & _
                   "To see if it is " & strDays & " days older than " & Now
    end if

    strComments = strComments & VBCRLF & "Analysed " & individualFile & ": "
    intDifferenceInDays = DateDiff("d", individualFile.datelastmodified, Now)

    If intDifferenceInDays > strDays Then
      if debugmode = 1 then
        wscript.echo "We have decided to delete " & individualFile & "... DateDiff is " & intDifferenceInDays
      end if
      strComments = strComments & "Deleting...."
      err.clear
      ' Second argument TRUE forces deletion of read-only files
      fso.DeleteFile individualFile, TRUE
      if err = 0 then
        strComments = strComments & " SUCCESS"
        intDeleted = intDeleted + 1
      else
        strComments = strComments & " **ERROR** " & err.description & " (" & err.source & ")"
        intNotDeleted = intNotDeleted + 1
      end if
    Else
      if debugmode = 1 then
        wscript.echo "We have decided to spare " & individualFile & "... DiD: " & intDifferenceInDays & "  Required: " & strDays
      end if
      strComments = strComments & "Not deleted. Only " & intDifferenceInDays & " days old"
    End If
  Next

  strComments = strComments & VBCRLF & "No of files deleted: " & intDeleted & VBCRLF & "Errors in deleting: " & intNotDeleted

  if intNotDeleted > 0 then
    logit strComments, ERROR
  else
    logit strComments, INFORMATION
  end if
End Function

Function logit(text,level)
‘ Writes a simple message to the windows event log
  dim Wshshell
  set WshShell = CreateObject("WScript.Shell")
  WshShell.LogEvent level, text

end Function


January 28, 2010

Design for an Exchange 2010 Backup

Filed under: Documentation — chaplic @ 6:41 pm

Like most, I’ve been coming to terms with the storage performance requirements (or, lack thereof) in Exchange 2010.

For any previous Exchange deployment (certainly 2003) you’d start with a SAN and use features like snapping to ensure you can backup without affecting performance.

To my mind SANs remain stubbornly expensive for what’s actually delivered (I was just quoted over £3500 for a single 15K 600GB SAS disk which is certified to run in a SAN!).

So the fact I really don’t need one for Exchange 2010 is perfect.

But how do I back up?

Microsoft will tell you they don’t back up at all, just rely on the native protection and deleted items retention.

I’m a little – just a little – less gung-ho than that, and I suspect many of my customers are, too.

There’s very little product choice on the market, or indeed much Microsoft collateral about how to backup Exchange 2010, so I thought I’d take a stab at a possible solution myself!

My objectives:

  • Don’t “waste” the genuine data protection tools native to Exchange 2010
  • Prepare for the day when an Exchange corruption is replicated to ALL databases (however unlikely that might be)
  • Provide a longer-term archive.


Consider the following environment:



We’ve got a single DAG with four copies of every database. Let’s say, for argument’s sake, we’ve got 5000 users with a 1GB mailbox each. Of course, our disks are directly attached and we’re using nice-and-cheap SATA storage as JBOD. Let’s use 1TB disks, because smaller ones are only beer money cheaper.

So far, so good, so like every other Exchange 2010 design piece. We’re leveraging the native protection and we’ve got four copies of the data.

But how to protect against the replicated corruption scenario?





I’m using another new feature of Exchange 2010: lagged replication. The server in question is always behind the other servers; in theory, should the “replicated corruption” scenario occur, we can take action before the corruption plays into our time-delayed mailbox server.

But how long a lag? Too short a delay and the corruption might get missed and played into the lagged database anyway. Too long and invocation of the lagged server might risk losing mail.

My best-guess figure was about 24 hours; this is comparable to a normal restore if we don’t have logfiles.

Now, observant types will have noticed there are extra disk arrays attached to the lagged mailbox server. To break with custom, these will be RAID5, and their purpose is to act as a file-share area for backup-to-disk operations. I’m doing disk-to-disk backups because:

  • I can, at very little infrastructure cost
  • Having recent backups online is always useful.

At the time of writing, the choice of backup products is underwhelming so I’m going to use the built-in tool. The real downside to this is that I can only backup from the active node, thus I need to be real careful about what I’m backing up, when. Pumping the data across the network in good time might be tricky without the right network setup.

Most likely, one or two databases will get backed up every night, with all databases having at least an incremental backup.

Now to the final part of the plan: the long-term archive. Hopefully never needed, but your operation might need to keep archives of data (this probably isn’t the solution for that requirement; you need to check out other new Exchange features). It’s most likely to be needed when the CEO wants an email he deleted 12 months ago.


Backup-to-tape therefore meets my need. I’m only going to back up to tape the files produced by the disk-to-disk backup process, and I’m going to choose my timings wisely.

So there we have it: a fairly robust backup architecture? I’m hoping that as time progresses and products fill the void (like DPM 2010), this solution will look archaic, but for now it’s my best shot at what backup could look like.

January 11, 2010

Search entire domains for service accounts

Filed under: Programs and Scripts — chaplic @ 6:29 pm


Have you ever been in a scenario where you need to change the password on a service account but don’t know which services on which servers use the account? You could pick through audit logs, and they still might not tell you if a service hasn’t been restarted recently. Regscan will visit all machines in your domain and give you a list of the machines that use the account.



Simply enter

regscan account domain [textfile.txt]


  • account is the account you are searching for. Don’t put the domain name first, regscan will pick out either notation from the service list
  • domain is the netbios domain name to search
  • textfile.txt (optional, but recommended) Specifies a list of servers to search, one per line. In large domains, this is a more reliable method than leaving the program to scan the domain to find machines.
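For illustration, the either-notation account match could be sketched like this in Python (a hypothetical reconstruction of the matching rule, not Regscan’s actual source):

```python
# Hypothetical sketch of Regscan's account matching: a service's configured
# logon account may be stored as "account" or "DOMAIN\account", and either
# notation should match the name being searched for.

def matches_account(service_logon, wanted):
    """True if the service logon matches 'wanted', with or without a domain prefix."""
    # Strip any "DOMAIN\" or "machine\" prefix before comparing, case-insensitively
    bare = service_logon.rsplit("\\", 1)[-1]
    return bare.lower() == wanted.lower()

if __name__ == "__main__":
    print(matches_account(r"CONTOSO\svc-backup", "svc-backup"))  # True
    print(matches_account("svc-backup", "svc-backup"))           # True
    print(matches_account(r"CONTOSO\svc-web", "svc-backup"))     # False
```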


Grab the program here. Let us know how you get on with it.

January 1, 2010

Fixing Windows Update error 80244021

Filed under: Fault Finding — chaplic @ 8:11 pm


Spotted on a couple of my machines: Windows Update was not working, with the above error:




The Microsoft TechNet article is pretty unhelpful, suggesting the Windows Update service is having trouble connecting, possibly due to an on-machine firewall stopping it.

Nothing that should be stopping this springs to mind, so my first concern is malware.  A quick scan with Malwarebytes didn’t show anything; sadly, I know that doesn’t guarantee we’re OK. I had a quick look at the hosts file; nothing had changed there. The IP addresses associated with the windowsupdate DNS names appeared to be OK. It did seem as if the PC was being blocked from getting updates.
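As an aside, the hosts-file check is easy to script; a rough Python sketch of the idea (the hostname list and parsing here are simplified assumptions, not a definitive malware check – on a real machine you’d read %SystemRoot%\System32\drivers\etc\hosts):

```python
# Rough sketch: scan hosts-file text for entries that override
# Windows Update hostnames (a common malware trick).

UPDATE_HOSTS = ("windowsupdate.com", "update.microsoft.com")

def suspicious_entries(hosts_text):
    """Return (ip, hostname) pairs that override update-related names."""
    hits = []
    for line in hosts_text.splitlines():
        line = line.split("#", 1)[0].strip()   # drop comments
        if not line:
            continue
        parts = line.split()
        ip, names = parts[0], parts[1:]
        for name in names:
            if any(name.lower().endswith(h) for h in UPDATE_HOSTS):
                hits.append((ip, name))
    return hits

sample = """# hosts file
127.0.0.1   localhost   www.windowsupdate.com
"""
print(suspicious_entries(sample))  # [('127.0.0.1', 'www.windowsupdate.com')]
```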

So, what is actually happening when I click “Get updates” ?

I needed something to let me see behind the lovely chromed update UI. The tool I chose was Fiddler. Mainly used by people debugging websites, it also has the useful knack of sniffing all HTTP traffic from the machine. Let’s fire it up and hit the “try again” button:




We can see the update process making its requests to a server called jelly.dessert.local

Clearly, the machine in question doesn’t belong to Windows Update. Fortunately, there’s an explanation which is less worrying than some uber-weird virus.

A few weeks ago, I needed a couple of hundred GB of disk space for some new VMs in a hurry. Being in a tight spot, I uninstalled WSUS, which conveniently was taking up about that much space; I then, of course, changed group policy so that my dozen or so machines talked to Windows Update directly.

It would appear, however, that a couple of machines have group-policy update issues and never got the change from using the local WSUS server to the Microsoft update servers.

So a fairly predictable fix from there on in. But the original fault-finding would be soooo much easier with a little more diagnostic detail in the error messages, Microsoft!



December 17, 2009

Debugging Exchange 2010 W3WP High Memory Usage

Filed under: Fault Finding — chaplic @ 10:49 pm


I checked the Wordfish Exchange server and noticed high memory usage; in particular, W3WP processes were consuming more memory than store.exe!

A quick Google produced nothing of note; the total memory usage was 1GB. On a server with two heavy users, that seems a tad high! One process was consuming more than 500MB alone.

TaskMan isn’t much help:


So, what next?

With headscratchers like this, Process Explorer is always a good bet.

Let’s fire it up, and look for our W3WP processes:



Let’s open the PID showing high memory usage to see if it gives us a clue:


Aha! This tells us the application pool!

Do we have a memory leak? Not sure, and without any web references we’re flying blind. Let’s jump into IIS, into Application Pools, and configure some scheduled recycling for the pool:



Let’s recycle it now to confirm. Note it’s using 176MB of memory:



The replacement process is consuming less memory.



Have I spotted a bug in Exchange’s use of IIS? Is this expected? Has my recycling helped? Don’t know yet. Only time will tell!


Technorati Tags: ,





December 4, 2009

FTP Test

Filed under: Programs and Scripts — chaplic @ 4:11 pm


FTPTest is a small application for testing the reliability of FTP servers. You supply it with a file and how many times you want to upload it, and it does the rest.  I wrote it to test the most horrible kind of problem to fix – an intermittent fault.

If your source file was test.txt, you would get testn.txt on the remote FTP server, where n is an increasing number up to the repetition count you selected.

FTPTest is configured via a small INI file, simply edit this in notepad or similar:

#host – address of FTP server

#directory – what directory to change to after login

#username – what user to login as

#password – what password to use

#origfile – what file to upload. This is copied to
# filenameN where N is an incrementing number depending on howmany

#howmany – number of repetitions of upload
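The upload loop those INI keys drive could look something like this (a Python re-imagining using ftplib for illustration, not the original program):

```python
# A Python re-imagining of FTPTest's core loop using ftplib.
# The cfg mapping mirrors the INI keys described above; uploading test.txt
# with howmany=3 produces test1.txt, test2.txt and test3.txt remotely.
import ftplib
import os

def numbered_name(origfile, n):
    """test.txt, 2 -> test2.txt (insert the counter before the extension)."""
    stem, ext = os.path.splitext(origfile)
    return f"{stem}{n}{ext}"

def run_test(cfg):
    # cfg is a dict-like object, e.g. a configparser section
    ftp = ftplib.FTP(cfg["host"])
    ftp.login(cfg["username"], cfg["password"])
    ftp.cwd(cfg["directory"])
    for n in range(1, int(cfg["howmany"]) + 1):
        with open(cfg["origfile"], "rb") as f:
            ftp.storbinary("STOR " + numbered_name(cfg["origfile"], n), f)
    ftp.quit()
```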

To download the program, click on this link. Be sure to get in touch to say hello if you find it of use!

November 29, 2009

Cisco Syslog Firewall Rules Parser

Filed under: Programs and Scripts — chaplic @ 7:05 pm

Scenario: You’ve got a Cisco ASA protecting some servers. The ruleset isn’t as tight as you’d like. You know some of the ports, source and destination machines that are in use, but cannot tell exactly what communications are going on.

The Cisco is syslogging, but it produces verbose text like this:

2009-11-25 18:14:08    Local4.Warning    %ASA-4-106100: access-list InterfaceA_access_in permitted tcp InterfaceA/Server6S009(2326) -> InterfaceB-Intl/ hit-cnt 1 first hit [0xda6858dc, 0xe76db01]
2009-11-25 18:14:09    Local4.Warning    %ASA-4-106100: access-list Outside_access_in permitted udp Outside/ -> InterfaceB-Intl/Server6S002(5560) hit-cnt 1 first hit [0x4429e5e8, 0xed2c2df8]
2009-11-25 18:14:09    Local4.Warning    %ASA-4-106100: access-list InterfaceB-DMZ_access_in permitted udp InterfaceB-Intl/Server6S002(39330) -> Outside/ hit-cnt 1 first hit [0xab98913c, 0x5268eddb]
2009-11-25 18:14:10    Local4.Warning    %ASA-4-106100: access-list Outside_access_in permitted udp Outside/Server5S002(56942) -> InterfaceA/Server6S011(53) hit-cnt 1 first hit [0xa57e4b1c, 0xf0e9804c]
2009-11-25 18:14:11    Local4.Warning    %ASA-4-

It’s difficult to pick out what’s going on and get the information you need. You could manually pick through it, or you could tightly configure the ASA to log only the rules and information you’re interested in – tricky, time-consuming, and maybe not possible if the firewall logging settings cannot be changed.

The solution therefore is a little script to scan the logfiles and pick out the interesting detail, aggregate and present it in a useful format.

I knocked up a little script to do this in Perl; it would be doable in PowerShell or VBScript, but I just like the really nice text-manipulation features of Perl. I saw it as further proof that any techie worth their salt must be able to knock together scripts to do little jobs like this.

All the script is doing is looking for lines like this

106100: access-list InterfaceB-DMZ_access_in permitted udp InterfaceB-Intl/Server6S002(39330) -> Outside/

From there, it’s pretty straightforward to grab the source server, destination server, protocol and ports used then do some maths on it.
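The core of that approach, translated to Python for illustration (the original script was Perl; the field layout below is my reading of the %ASA-4-106100 sample lines above, so treat the exact regex as an assumption):

```python
# Python sketch of the syslog-parsing approach: pull out source, destination,
# protocol and destination port from ASA 106100 lines, then count each flow.
import re
from collections import Counter

LINE_RE = re.compile(
    r"access-list\s+(?P<acl>\S+)\s+permitted\s+(?P<proto>\w+)\s+"
    r"(?P<src_if>[^/\s]+)/(?P<src>\S+?)\((?P<sport>\d+)\)\s+->\s+"
    r"(?P<dst_if>[^/\s]+)/(?P<dst>\S+?)\((?P<dport>\d+)\)"
)

def aggregate(log_text):
    """Count (source, destination, protocol, dest port) tuples in the syslog."""
    counts = Counter()
    for m in LINE_RE.finditer(log_text):
        counts[(m.group("src"), m.group("dst"), m.group("proto"), m.group("dport"))] += 1
    return counts

sample = (
    "%ASA-4-106100: access-list Outside_access_in permitted udp "
    "Outside/Server5S002(56942) -> InterfaceA/Server6S011(53) hit-cnt 1\n"
    "%ASA-4-106100: access-list Outside_access_in permitted udp "
    "Outside/Server5S002(60001) -> InterfaceA/Server6S011(53) hit-cnt 1\n"
)
for flow, n in aggregate(sample).items():
    print(flow, n)
```

The real tool writes the aggregated flows out as filename.txt.csv; here they are simply printed.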

The output of the processing is shown here:




A nicely presented list showing source and destination, port, protocol and how many times it’s appeared in the syslog

To run the tool, from the command line enter:

syslogparser filename.txt

And a file filename.txt.csv will be output.

Get the application here

November 19, 2009

Exchange 2010 install – first thoughts

Filed under: Uncategorized — chaplic @ 4:49 pm


Just upgraded my company’s mail server to Exchange 2010. It’s not a large affair: about 10 mailboxes and a 7GB store. The user estate is more forgiving than most, and half of them were on holiday, so I had a bit of leeway.

It was previously running Exchange 2007 in a Hyper-V VM on a Dell quad-core server with 10GB of RAM, alongside a handful of other VMs.

The first task was to set up a 2008 R2 VM and install Exchange 2010, both of which completed without incident. Nice not to need to install a gazillion pre-requisites, as you do with Exchange 2007 on vanilla Server 2008.

At this point I noticed performance issues. My Exchange 2007 box had 4GB of RAM assigned to it, the Exchange 2010 box 2GB (all I had left). Interactively, the performance was dire, as was web access.

Changing both boxes to 3GB helped slightly – well, it made performance on both boxes poor. Moral of the story: Exchange 2007 and 2010, even for the most basic deployments, need 4GB of RAM to be acceptable.

Once most things were settled, I decided to go hell-for-leather and retire my Ex07 box. This was possibly the trickiest element of all!

Move mailboxes completed without incident, the 3GB mailbox taking just over two hours.

The Exchange uninstall would crash upon starting; it seems stopping all the services first is necessary.

To uninstall the server I needed to remove the public folder store. Try as I might, I couldn’t – after deleting all the PFs I could, taking replicas off, and trying various PowerShell scripts, still no joy.

So, the hacker’s tool of last resort: ADSIEdit.



I opened the path shown and deleted the reference to the Public Folder. Success! I shut down Ex07 and gave Ex10 the memory it needed. Much better performance!

After some sanity checking and building of internet connectors, I changed the NAT and firewall rules to swing new email to the Ex10 server.


All Email was bounced with an error message of

“Client host x.y.z.a UnknownDNSName”

I think this was caused by the fact I used OpenDNS.

Turning off the “Enable Forefront DNSBL checking” option cured this, and (so far) there has been no noticeable increase in spam.



ActiveSync and Outlook Anywhere took a bit of work to bring to life. The excellent helped me out with the Outlook Anywhere config errors. It didn’t help with ActiveSync, but sometimes, just sometimes, the event log tells you exactly what’s wrong:


After adding the permissions, my iPhone buzzed into life automatically.

Overall, this is clearly NOT a migration approach suitable for a large-scale Exchange implementation with high-availability requirements, but it was fairly smooth and, to my mind, more reminiscent of an Exchange 2000-to-2003 upgrade than the “step changes” we saw from 2003 to 2007 or 5.5 to 2000.

Technorati Tags: ,


October 29, 2009

Government Security is quite good – and out to get you.

Filed under: Uncategorized — chaplic @ 11:22 am


Note: repost

The UK Government’s Information Assurance policies (IT security to you and me) are actually quite good.

There, I said it.

And before someone mentions the thorny issue of CDs in the post, allow me to delve a bit deeper.

Each department is responsible for assessing their own risk and putting countermeasures and functionality in place as they see fit. However, it’s driven from policy from the “centre” meaning there is a commonality across all central government departments.

For the most vital of documents, keeping them confidential, unmolested and available when they are needed is critical.

However, not all data falls into this category, and providing the ultimate level of protection to all data would be considerably expensive and cumbersome. To help with segregation of data, the government uses protective markings.

This is a short term like RESTRICTED or TOP SECRET which is shorthand for what would happen should the information be compromised. Lower markings may just mean some commercial exposure or embarrassment, rising right up to the compromise of other assets directly leading to loss of life. Labelling documents and systems makes the value of the data contained within very clear.

This probably isn’t directly applicable to most commercial companies. However, if many had a label of, say, “PERSONALLY IDENTIFIABLE INFORMATION” or “COMMERCIALLY SENSITIVE”, and clear guidelines as to how information like this should be handled (i.e. do not take a document labelled “PERSONALLY IDENTIFIABLE INFORMATION” on a laptop without hard-disk encryption), how many fewer cases of potential identity theft would we have?

So, the UK Government has a nice labelling system which puts all data in little pots and a bunch of policy documents telling users what they cannot do and a whole host of technical security requirements. Fascinating, but not a compelling reason for your business to get on-board with a structured security methodology?

e-Government is an agenda that’s still quickening pace. You will almost certainly have some customers who are related to, or are, a government organisation.

National Government recognises the value of secure communications and is pushing its intranet (the GSi – Government Secure Intranet – and variants) out to partner organisations, quangos, and local councils. To connect up, these bodies have to warrant that their systems stand up to Codes of Connection.

If you want to do business with any of these bodies, you are going to have to get to grips with these requirements too. Fortunately, the requirements are not arcane, unusual or hidden: they are published on the Cabinet Office website and called the Security Policy Framework.

Let’s quote one requirement that’s pertinent here:

Departments and Agencies must have, as a component of their overarching security policy, an information security policy setting out how they, and their delivery partners (including offshore and nearshore (EU/EEA based) Managed Service Providers), comply with the minimum requirements set out in this policy and the wider framework

There’s no escaping it. Expect to see adherence to SPF in your ITT and contractual requirements (if they are not already).

Many companies, if not well-versed in Government IT security, find the process alarming when the full implications are realised. They may well have used enough smoke-and-mirrors during the bid phase to hide their lack of knowledge, or indeed a poor score in this area may not have been enough to lose the bid.

But when they come to deliver, under the full scrutiny of experienced consultants, accreditors and security officers they often find delivering their SPF-related contractual obligations to be daunting (and, expensive).

But all is not lost. This is a scenario where security can truly be a business-enabler for your company.

Firstly, it provides you with a carefully thought-out, well-proven and common set of criteria for your IT security operation. Sometimes even organisations with pretty tight IT security setups, like banks, find they do not meet the criteria. It isn’t necessarily a quick fix, but it is a path for your organisation (or perhaps only a subsection of it).

To understand how mature your Information Assurance is and how work is progressing, an Information Assurance Maturity Model is available – those who work with CMMi will be in their element.

Secondly, and most importantly – your company will likely want to do business with the government at some point, on some level. Taking these steps now will not only demonstrate the value of security to the business, it will put your company in the driving seat when it comes to delivering these new contracts.

Finally, can a UK government IT policy catch on and be universally accepted? Well, ITIL isn’t doing too badly!

October 28, 2009

Developing Custom IAG Application Optimisers

Filed under: Techy — chaplic @ 8:46 am

Microsoft’s Intelligent Application Gateway (IAG) includes a number of built-in application optimisers to secure and protect the applications you want to publish. It also features a high-performance search-and-replace engine which can filter web pages in-line; coupled with URL rules, you can build your own complex application optimisations.

It’s easy to build custom filters with IAG.

It’s easy to build custom filters with IAG It’s easy and straightforward to create your own rules to meet your own business needs. This article explains how by delving into an example publishing Outlook Web Access (OWA) using labelling/metadata tags.

The Scenario

Like many organisations, Contoso makes use of metadata tags to assist with archiving, retention and security policies. These are implemented by the use of a label in the subject line of all emails.  Exchange transport rules add labels if users do not apply them.

Writing a labeled email

Due to the nature of Contoso’s business, they wish to prevent some emails being viewed from ‘untrusted’ workstations, for example, web cafes.

Contoso already uses IAG to provide access to OWA 2007. IAG includes a number of powerful features that allow you to decide what a ‘trusted’ workstation is, for example by checking a registry key or anti-virus update level.

With Contoso, all IAG access will be from machines deemed untrusted, and therefore the following business rules need to apply:

  • Emails labelled [PERSONAL], [SOCIAL] or [LOWRISK] can be viewed
  • Emails labelled [FINANCIAL] or with no valid label cannot be viewed
The Process

This example takes you through the design, thought process and implementation of an application optimiser to satisfy the above business rules, and assumes a basic familiarity with IAG. The IAG example Virtual Machines available from the Microsoft website are a suitable candidate for following this example.

Step 1: Understand your application

In order to create the rule to enable this functionality, we need to understand how the application operates. The best way to do this is to browse a number of sessions using a network sniffer like WireShark, Netmon, or Fiddler to understand the HTML and related syntax produced.

Let’s take this example of an OWA page:

A labelled email through OWA

A full capture of the underlying HTML is available here; the edited highlights are below:

<!-- Copyright (c) 2006 Microsoft Corporation. All rights reserved. -->
<!-- OwaPage = ASP.forms_premium_readmessage_aspx -->
<html dir="ltr">
<title>[FINANCIAL] Takeover plans</title>
<body class="rdFrmBody">
<div id=divThm style=display:none _def=8.0.685.24/themes/base/>
<textarea id="txtBdy" class="w100 txtBdy" style="display:none">
&lt;div dir=&quot;ltr&quot;&gt;&lt;font face=&quot;Tahoma&quot; color=&quot;#000000&quot; size=&quot;2&quot;&gt;Let’s go buy litware inc&lt;/font&gt;&lt;/div&gt; &lt;/body&gt; &lt;/html&gt;

The screenshot earlier shows the HTML produced when a page is requested. What we are looking for is a repeatable, common pattern so we can then write a regular expression to define what is acceptable. Examining the HTML syntax above (and testing against a number of scenarios), we can see a pattern forming that would meet our needs:

  • The Subject text is included in the code preceded by <TITLE>
  • There is a consistent marker to show the last line that we’d want to match on: </html>

We also need to decide what we’re replacing the redacted text with. Ideally this should be as similar to the original page as possible to avoid script errors. For simplicity, in this example, the replacement text is to be:

<body class="frmBody">
<div id=divThm style=display:none _def=>
The policy of your organisation does not permit access to this email from this location

Step 2: Define your regular expression

To do this, you need to understand how to construct regular expressions (also called regex or regexps), and there are lots of guides available on the web.

To test your regexp, copy and paste your HTML grab into your favourite text editor that supports regular expressions to search for text.

In this case, we want to achieve the following:

Search for the body content of a webpage, and if it doesn’t have [PERSONAL], [SOCIAL] or [LOWRISK] at the start of the subject line, then redact the body text (replace the text with something else)

Referring back to Step 1, we note that we want to begin the search looking for <title> and end it looking for </html>.

This would translate into a regexp as:

<title>.*</html>

However, this isn’t good enough; it would match ALL pages and mean they would ALL be redacted.

What we have to do now is define the strings that poison the search expression – in other words, if these terms are in the search string then do not match the string.

This is a bit of a leap, so we need to go back to our business rules:

  • Emails labelled [PERSONAL], [SOCIAL] or [LOWRISK] can be viewed
  • Emails labelled [FINANCIAL] or with no valid label cannot be viewed

As it stands, it’s difficult to write a regexp to cover these two rules. However, they could be re-written to say:

  • Do not display any emails unless they are labelled [PERSONAL], [SOCIAL] or [LOWRISK]

This is semantically the same but much easier to write into a regexp as it is one rule. It also ‘fails safe’ because if a new label is used, the page will not be displayed by default.

We now need to write the ‘unless’ part of our regexp. For this, we’ll use the fantastically titled ‘negative forward lookahead’, combined with a wildcard.

The negative lookahead is represented as ?! and can be thought of as a ‘NOT’. It would look like this:

(?!\[PERSONAL\]|\[SOCIAL\]|\[LOWRISK\]).*

Which reads as:

Zero or more of any characters, so long as [PERSONAL], [SOCIAL] or [LOWRISK] does not appear at the start.

So, putting it all together we get:

<title>(?!\[PERSONAL\]|\[SOCIAL\]|\[LOWRISK\]).*</html>
Finally, although in IAG we can define what pages this search-and-replace will act upon, it’s worth customising the search syntax for the exact pages you intend to filter on, to minimise any mishaps.

If we examine the code, we can see the following:

<!-- OwaPage = ASP.forms_premium_readmessage_aspx -->

So with a bit more experimentation, we can come up with:

.*readmessage.asp.*<title>(?!\[PERSONAL\]|\[SOCIAL\]|\[LOWRISK\]).*</html>
Step 3: Integrate it with IAG

Now that we’ve built our regular expression, we need to build it into IAG.

On your IAG server, load the Editor from the Whale Communications IAG\Additional Tools menu, then open the file

WhlFiltSecureRemote_HTTP.xml

(or WhlFiltSecureRemote_HTTPS.xml if that’s what your portal uses)

To get an example of what we are going to do, search for

<SEARCH encoding="base64">

The first thing you’ll notice is that the search string is garbled: it is encoded in Base64, which makes life easier because you do not need to escape control characters.
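If you’d rather not fight the editor, the Base64 round-trip is easy to do outside it; for example in Python (just an illustration of the encoding step, not an IAG tool):

```python
# Illustration of the Base64 step: IAG stores search/replace strings
# Base64-encoded so that control characters need no escaping.
import base64

search = r"<title>(?!\[PERSONAL\]|\[SOCIAL\]|\[LOWRISK\]).*</html>"
encoded = base64.b64encode(search.encode("utf-8")).decode("ascii")
print(encoded)

# Decoding gets the original string back:
print(base64.b64decode(encoded).decode("utf-8") == search)  # True
```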

To decipher the text, click your cursor at the start of the encoded text, hold shift then use the right arrow key to select text until the </SEARCH> tag, then click ‘From 64’, as shown on the screenshot below.

Don’t use the mouse to click and drag, because it often helpfully tries to extend the selection beyond the end of the text – and fails!

IAG Editor

This then gives you an idea of the syntax required for our search-and-replace instruction:

         <NAME>URL for the pages required</NAME>
            <SEARCH mode="regexparam" encoding="base64">search regexp - base64 encoded</SEARCH>
            <REPLACE encoding="base64">Replacement text, base64 encoded</REPLACE>

So, our example will be:

            <SEARCH mode="regexparam" encoding="base64">.*readmessage.asp.*<title>(?!\[PERSONAL\]|\[SOCIAL\]|\[LOWRISK\]).*</html> </SEARCH>
            <REPLACE encoding="base64"><body class="frmBody"> <div id=divThm style=display:none
_def=></div> The policy of your organisation does not permit access to this email from this location </body></html> </REPLACE>


  • The text above isn’t Base64 encoded yet to aid readability
  • Even though the text is going to be Base64 encoded, it is still necessary to escape the open and close square brackets
  • The <NAME>.*</NAME> defines what path/pagename the search string should match on; in this example, it will attempt to match against ALL OWA pages
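Before encoding, the search expression itself can be sanity-checked outside IAG; here’s a quick Python harness (it uses Python’s re engine rather than IAG’s, so exact behavioural parity is an assumption, and the sample HTML is abbreviated):

```python
# Quick harness for the redaction regex. The pattern matches a page whose
# <title> does NOT begin with an approved label - so a match means "redact".
import re

PATTERN = re.compile(
    r"<title>(?!\[PERSONAL\]|\[SOCIAL\]|\[LOWRISK\]).*</html>",
    re.DOTALL | re.IGNORECASE,
)

REPLACEMENT = ("The policy of your organisation does not permit access "
               "to this email from this location</html>")

def filter_page(html):
    """Redact the page body if the subject label is not approved."""
    return PATTERN.sub(REPLACEMENT, html)

secret = "<html><title>[FINANCIAL] Takeover plans</title><body>Buy Litware</body></html>"
benign = "<html><title>[PERSONAL] Lunch?</title><body>12:30?</body></html>"
print("Buy Litware" in filter_page(secret))  # False - redacted
print("12:30?" in filter_page(benign))       # True  - left alone
```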

Select a location for your search-and-replace instruction and nestle it between two existing ones:

IAG Editor

IAG is very, very unforgiving (and unhelpful) if you get any syntax wrong. One way to protect against this is to load the XML file in Internet Explorer and allow active content. This will highlight some syntactical errors.

IE Helping us to spot bugs

If we pass the IE test, then it’s time to activate the IAG configuration, ensuring ‘Apply changes made to external configuration settings’ is ticked.

Activate the config

You may encounter one of the following errors:

  • IAG Admin Console Message Area reporting failure to save config
  • Invalid index when attempting to view the IAG portal

If so, it’s almost certainly to do with invalid syntax: are you sure you’ve Base64 encoded everything properly?

Now you have ironed out all the syntax errors, it’s time to test. Let’s look at a message that the business rules state we should not be able to see:

Our filtering rule being applied

Note the IE Script error message; this is due to our simplified replacement text. Now, let’s double-click on another prohibited email (the email with subject “Label-less”) and examine the results:

IAG Filtering rule in action

Finally, let’s confirm all is OK by attempting to read the email we should be able to access:

Access to an email

Success! Your information security policy can be complied with, and you can chalk up one more victory with the assistance of IAG.


Although the example above takes a number of design, development and test short-cuts, we can see it’s achievable to write your own application optimisers to meet your own business needs. Wordfish Ltd is an IT consultancy specialising in infrastructure design, novel solutions and web development. If you would like some help with your IAG application optimisers, I’d love to hear from you.

