Colin’s IT, Security and Working Life blog

January 11, 2010

Search entire domains for service accounts

Filed under: Programs and Scripts — chaplic @ 6:29 pm

 

Have you ever been in a scenario where you need to change the password on a service account but don’t know which services on which servers use the account? You could pick through audit logs, and even then they might not tell you if a service hasn’t been restarted recently. Regscan will visit all machines in your domain and give you a list of machines that use that account.

image

Usage

Simply enter

regscan account domain [textfile.txt]

where:

  • account is the account you are searching for. Don’t prefix it with the domain name; regscan will pick out either notation from the service list
  • domain is the netbios domain name to search
  • textfile.txt (optional, but recommended) specifies a list of servers to search, one per line. In large domains, this is more reliable than leaving the program to scan the domain to find machines.
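Regscan itself is a compiled tool, but the “either notation” matching it performs could be sketched like this (a hypothetical Python illustration of the idea; the account and domain names are made up, and regscan’s own code isn’t shown here):

```python
def matches_account(service_logon: str, account: str, domain: str) -> bool:
    """True if a service's "Log On As" value refers to the target account.

    Accepts either notation a service list may contain - DOMAIN\\account,
    account@domain or a bare account name. Case-insensitive, as Windows
    account names are.
    """
    logon = service_logon.strip().lower()
    account, domain = account.lower(), domain.lower()
    return logon in (account, f"{domain}\\{account}", f"{account}@{domain}")
```

So a search for svc-backup in CORP would flag services running as CORP\svc-backup, svc-backup@corp or plain svc-backup, but not LocalSystem.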

Download

Grab the program here. Let us know how you get on with it.

January 1, 2010

Fixing Windows Update error 80244021

Filed under: Fault Finding — chaplic @ 8:11 pm

 

Spotted on a couple of my machines: Windows Update was not working, with the above error:

 

image

 

The Microsoft TechNet article is pretty unhelpful, suggesting the Windows Update service is having trouble connecting, possibly due to an on-machine firewall stopping it.

Nothing that should be stopping this springs to mind, so my first concern is malware. A quick scan with Malwarebytes didn’t show anything; sadly, I know that doesn’t guarantee we’re OK. I had a quick look at the hosts file; nothing changed there. The IP addresses associated with the windowsupdate DNS names appeared to be OK. It did seem as if the PC was being blocked from getting updates.

So, what is actually happening when I click “Get updates” ?

I needed something to let me see behind the lovely chromed update UI. The tool I chose was Fiddler. Mainly used by people debugging websites, it also has the useful knack of sniffing all HTTP traffic from the machine. Let’s fire it up and hit the “try again” button:

 

image

 

We can see the update process requesting wuident.cab from a server called jelly.dessert.local.

Clearly, the machine in question doesn’t belong to Windows Update. Fortunately, there’s an explanation which is less worrying than some uber-weird virus.

A few weeks ago, I needed a couple of hundred GB of disk space for some new VMs in a hurry. Being in a tight spot, I uninstalled WSUS, which conveniently was taking up about that much space; I then of course changed group policy so that my dozen or so machines talked to Windows Update directly.

It would appear, however, that a couple of machines have group-policy update issues and never picked up the change from the local WSUS server to the Microsoft update servers.

So, a fairly predictable fix from there on in. But the original fault-finding would be so much easier with a little more diagnostic detail in the error messages, Microsoft!
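Incidentally, once you have a capture like the one above, spotting rogue update sources is scriptable. A sketch in Python, assuming a hypothetical export of one request URL per line from Fiddler:

```python
from urllib.parse import urlparse

# Hosts we expect Windows Update traffic to go to (an assumption for this
# sketch - the real list of Microsoft update hosts is longer).
EXPECTED_SUFFIXES = (".windowsupdate.com", ".microsoft.com")

def unexpected_update_hosts(request_urls):
    """Return hosts serving wuident.cab that don't look like Microsoft's."""
    suspects = set()
    for url in request_urls:
        parsed = urlparse(url)
        if parsed.hostname and "wuident.cab" in parsed.path \
                and not parsed.hostname.endswith(EXPECTED_SUFFIXES):
            suspects.add(parsed.hostname)
    return sorted(suspects)
```

Run against the capture above, jelly.dessert.local would be the only host flagged.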

 

image

December 17, 2009

Debugging Exchange 2010 W3WP High Memory Usage

Filed under: Fault Finding — chaplic @ 10:49 pm

 

I checked the Wordfish Exchange server and noticed high memory usage; in particular, W3WP processes were consuming more memory than store.exe!

A quick Google produced nothing of note; the total memory usage was 1GB. On a server with two heavy users, that seems a tad high! One process was consuming more than 500MB alone.

TaskMan isn’t much help:

image
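One way to at least put numbers on it is to parse the output of tasklist /fo csv (a Python sketch; the CSV layout and the “524,288 K” memory format are as produced by an English-locale system):

```python
import csv
import io

def w3wp_memory(tasklist_csv: str) -> dict:
    """Parse `tasklist /fo csv` output into {pid: memory_in_kb} for w3wp.exe.

    Assumes the English-locale format, where the memory column reads
    like "524,288 K".
    """
    usage = {}
    for row in csv.reader(io.StringIO(tasklist_csv)):
        if row and row[0].lower() == "w3wp.exe":
            kb = int(row[4].replace(",", "").replace(" K", ""))
            usage[int(row[1])] = kb
    return usage
```

Feed it the output of tasklist /fi "imagename eq w3wp.exe" /fo csv /nh, and the worst offender falls out of a max() over the result.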

So, what next?

With headscratchers like this, Process Explorer is always a good bet.

Let’s fire it up, and look for our W3WP processes:

image

 

Let’s open the PID showing high memory usage to see if it gives us a clue:

image

Aha! This tells us the application pool!

Do we have a memory leak? Not sure, and without any web references we’re flying blind. Let’s jump into IIS, into Application Pools, and set up some recycling for the pool:

image

 

Let’s recycle the pool now to confirm. Note the process is using 176MB of memory:

image

 

The replacement process is consuming less memory:

 

image

Have I spotted a bug in Exchange IIS? Is this expected? Has my recycling helped? Don’t know yet. Only time will tell!

 


December 4, 2009

FTP Test

Filed under: Programs and Scripts — chaplic @ 4:11 pm

 

FTPTest is a small application for testing the reliability of FTP servers. You supply it with a file and how many times you want to upload the file, and it does the rest. I wrote it to test the most horrible problem to fix – an intermittent fault.

If your source file was test.txt and you selected ten uploads, you would get testn.txt on the remote FTP server, where n is an increasing number.

FTPTest is configured via a small INI file; simply edit this in Notepad or similar:

#host – address of FTP server
host=www.ftp.com

#directory – what directory to change to after login
directory=

#username – what user to login as
username=aaa

#password – what password to use
password=xx

#origfile – what file to upload. This is copied to
# filenameN where N is an incrementing number depending on howmany
origfile=c:\perlprog\anotherdesk.jpg

#howmany – number of repetitions of upload
howmany=10
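Roughly speaking, FTPTest’s behaviour with those settings amounts to the following (a hypothetical reimplementation using Python’s ftplib, shown only to illustrate the naming scheme and upload loop – it is not the tool’s actual code):

```python
import os
from ftplib import FTP

def numbered_name(origfile: str, n: int) -> str:
    """test.txt uploaded as pass n becomes testn.txt - e.g. test3.txt."""
    base, ext = os.path.splitext(os.path.basename(origfile))
    return f"{base}{n}{ext}"

def run_test(host, username, password, directory, origfile, howmany):
    """Upload the same file `howmany` times under incrementing names."""
    with FTP(host) as ftp:
        ftp.login(username, password)
        if directory:
            ftp.cwd(directory)
        for n in range(1, howmany + 1):
            with open(origfile, "rb") as fh:
                ftp.storbinary(f"STOR {numbered_name(origfile, n)}", fh)
```

With howmany=10 and origfile anotherdesk.jpg, the server ends up with anotherdesk1.jpg through anotherdesk10.jpg.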

To download the program, click on this link. Be sure to get in touch to say hello if you find it of use!

November 29, 2009

Cisco Syslog Firewall Rules Parser

Filed under: Programs and Scripts — chaplic @ 7:05 pm

Scenario: you’ve got a Cisco ASA protecting some servers. The ruleset isn’t as tight as you’d like. You know some of the ports, source and destination machines that are in use, but cannot tell exactly what communications are going on.

The Cisco is syslogging, but it produces verbose text like this:

2009-11-25 18:14:08    Local4.Warning    192.168.10.1    %ASA-4-106100: access-list InterfaceA_access_in permitted tcp InterfaceA/Server6S009(2326) -> InterfaceB-Intl/172.16.32.17(443) hit-cnt 1 first hit [0xda6858dc, 0xe76db01]
2009-11-25 18:14:09    Local4.Warning    192.168.10.1    %ASA-4-106100: access-list Outside_access_in permitted udp Outside/172.16.19.83(50088) -> InterfaceB-Intl/Server6S002(5560) hit-cnt 1 first hit [0x4429e5e8, 0xed2c2df8]
2009-11-25 18:14:09    Local4.Warning    192.168.10.1    %ASA-4-106100: access-list InterfaceB-DMZ_access_in permitted udp InterfaceB-Intl/Server6S002(39330) -> Outside/172.16.19.83(50088) hit-cnt 1 first hit [0xab98913c, 0x5268eddb]
2009-11-25 18:14:10    Local4.Warning    192.168.10.1    %ASA-4-106100: access-list Outside_access_in permitted udp Outside/Server5S002(56942) -> InterfaceA/Server6S011(53) hit-cnt 1 first hit [0xa57e4b1c, 0xf0e9804c]
2009-11-25 18:14:11    Local4.Warning    192.168.10.1    %ASA-4-

It’s difficult to pick out what’s going on and get the information you need. You could manually pick through it, or you could tightly configure the ASA to log only the rules and information you’re interested in – tricky, time-consuming, and maybe not possible if the firewall logging settings cannot be changed.

The solution, therefore, is a little script to scan the logfiles, pick out the interesting detail, then aggregate and present it in a useful format.

I knocked up a little script to do this in Perl; it would be do-able in PowerShell or VBScript, but I just like Perl’s really nice text-manipulation features. I see it as further proof that any techie worth their salt must be able to knock together scripts for little jobs like this.

All the script is doing is looking for lines like this

106100: access-list InterfaceB-DMZ_access_in permitted udp InterfaceB-Intl/Server6S002(39330) -> Outside/172.16.19.83

From there, it’s pretty straightforward to grab the source server, destination server, protocol and ports used then do some maths on it.
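The guts of that aggregation can be sketched as follows (the original is a Perl script; this is an illustrative Python equivalent, not the script itself):

```python
import re
from collections import Counter

# Matches the interesting part of an ASA 106100 message, e.g.
# "...106100: access-list X permitted udp Outside/172.16.19.83(50088) ->
#  InterfaceB-Intl/Server6S002(5560) hit-cnt 1 ..."
FLOW = re.compile(
    r"106100: access-list \S+ permitted (\w+) "
    r"\S+/(\S+?)\((\d+)\) -> \S+/(\S+?)\((\d+)\)"
)

def aggregate(loglines):
    """Count (source, destination, protocol, dest port) flows in the syslog."""
    counts = Counter()
    for line in loglines:
        m = FLOW.search(line)
        if m:
            proto, src, _sport, dst, dport = m.groups()
            counts[(src, dst, proto, dport)] += 1
    return counts
```

Writing the counts out, one row per flow, gives a CSV like the filename.txt.csv output the tool produces.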

The output of the processing is shown here:


image

 

A nicely presented list showing source and destination, port, protocol and how many times each flow appeared in the syslog.

To run the tool, from the command line enter:

syslogparser filename.txt

And a file filename.txt.csv will be output.

Get the application here.

November 19, 2009

Exchange 2010 install – first thoughts

Filed under: Uncategorized — chaplic @ 4:49 pm

 

Just upgraded my company’s mail server to Exchange 2010. It’s not a large affair – about 10 mailboxes and a 7GB store. The user estate is more forgiving than most, and half of them were on holiday, so I had a bit of leeway.

It was previously running Exchange 2007 in a Hyper-V VM on a Dell quad-core server with 10GB of RAM and a handful of other VMs running.

First task was to set up a 2008 R2 VM and install Ex2010, both of which completed without incident. Nice not to need to install a gazillion prerequisites, as you do with Exchange 2007 on vanilla Windows 2008.

At this point I noticed performance issues. My Exchange 07 box had 4GB of RAM assigned to it, the Ex10 box 2GB (all I had left). Interactively the performance was dire, as was web access.

Changing both boxes to 3GB helped slightly – well, it made performance on both boxes poor. Moral of the story: Exchange 200x, even for the most basic applications, needs 4GB of RAM to be acceptable.

Once most things were settled, I decided to go hell-for-leather and retire my Ex07 box. This was possibly the trickiest element of all!

Move mailboxes completed without incident, the 3GB mailbox taking just over two hours.

The Exchange uninstall would crash upon starting; it seems stopping all the services first is necessary.

To uninstall the server I needed to remove the public folder store. Try as I might, I couldn’t – after deleting all the PFs I could, taking replicas off and running various PowerShell scripts, still no joy.

So, the hacker’s tool of last resort: ADSIEdit.

image

 

I opened the path shown and deleted the reference to the Public Folder. Success! I shut down Ex07 and gave Ex10 the memory it needed. Much better performance!

After some sanity checking and building of internet connectors, I changed the NAT and firewall rules to swing new email to the Ex10 server.

EEeeek!

All email was bounced with an error message of

“Client host x.y.z.a UnknownDNSName”

I think this was caused by the fact I used OpenDNS.

Turning off the “Enable Forefront DNSBL checking” option cured this, and (so far) there’s been no noticeable increase in spam.

image

 

ActiveSync and Outlook Anywhere took a bit of work to bring to life. The excellent https://www.testexchangeconnectivity.com/ helped me out with the Outlook Anywhere config errors. It didn’t help with ActiveSync but sometimes, just sometimes, the event log tells you exactly what’s wrong:

image

After adding the permissions, my iPhone buzzed into life automatically.

Overall, clearly NOT a migration approach suitable for a large-scale Exchange implementation with high-availability requirements, but it was fairly smooth and, to my mind, more reminiscent of an Exchange 2000 to 2003 upgrade than the “step changes” we saw from 03 to 07 or 5.5 to 2000.


October 29, 2009

Government Security is quite good – and out to get you.

Filed under: Uncategorized — chaplic @ 11:22 am

 

Note: repost

The UK Government’s Information Assurance policies (IT security to you and me) are actually quite good.

There, I said it.

And before someone mentions the thorny issue of CDs in the post, allow me to delve a bit deeper.

Each department is responsible for assessing its own risk and putting countermeasures and functionality in place as it sees fit. However, it’s driven by policy from the “centre”, meaning there is commonality across all central government departments.

For the most vital of documents, keeping them confidential, unmolested and available when they are needed is critical.

However, not all data falls into this category, and to provide the ultimate protection to all data would be extremely expensive and cumbersome. To help with segregation of data, the government uses protective markings.

This is a short term like RESTRICTED or TOP SECRET which is a shorthand to describe what would happen should the information be compromised. Lower markings may just mean some commercial exposure or embarrassment, right up to the compromise of other assets directly leading to loss of life. Labelling documents and systems makes the value of the data contained within very clear.

This probably isn’t directly applicable to most commercial companies. However, if many had a label of, say, “PERSONALLY IDENTIFIABLE INFORMATION” or “COMMERCIALLY SENSITIVE” and clear guidelines as to how information like this should be handled (i.e. do not take a document labelled “PERSONALLY IDENTIFIABLE INFORMATION” on a laptop without hard disk encryption), how many fewer cases of potential identity theft would we have?

So, the UK Government has a nice labelling system which puts all data in little pots, a bunch of policy documents telling users what they cannot do, and a whole host of technical security requirements. Fascinating, but not a compelling reason for your business to get on board with a structured security methodology?

e-Government is an agenda that’s still quickening pace. You will almost certainly have some customers who are, or are related to, a government organisation.

National Government recognises the value of secure communications and is pushing its intranet (the GSi – Government Secure Intranet – and variants) out to partner organisations, quangos, and local councils. To connect up, these bodies have to warrant that their systems stand up to Codes of Connection.

If you want to do business with any of these bodies you are going to have to get to grips with these requirements too. Fortunately, the requirements are not arcane, unusual or hidden. They are published on the cabinet office website and called the Security Policy Framework http://www.cabinetoffice.gov.uk/spf.aspx

Let’s quote one requirement that’s pertinent here:

Departments and Agencies must have, as a component of their overarching security policy, an information security policy setting out how they, and their delivery partners (including offshore and nearshore (EU/EEA based) Managed Service Providers), comply with the minimum requirements set out in this policy and the wider framework

There’s no escaping it. Expect to see adherence to SPF in your ITT and contractual requirements (if they are not already).

Many companies, if not well-versed in Government IT security, find the process alarming when the full implications are realised. They may well have used enough smoke-and-mirrors during the bid phase to hide their lack of knowledge, or indeed a poor score in this area may not have been enough to lose the bid.

But when they come to deliver, under the full scrutiny of experienced consultants, accreditors and security officers, they often find delivering their SPF-related contractual obligations to be daunting (and expensive).

But all is not lost. This is a scenario where security can truly be a business-enabler for your company.

Firstly, it provides you with a carefully thought out, well-proven and common set of criteria for your IT security operation. Sometimes even organisations with pretty tight IT security setups, like banks, find they do not meet the criteria. It isn’t necessarily a quick fix, but it is a path for your organisation (or perhaps only a subsection of it).

To understand how mature your Information Assurance is and how work is progressing, an Information Assurance Maturity Model is available – those who work with CMMi will be in their element.

Secondly, and most importantly – your company will likely want to do business with the government at some point, on some level. Taking these steps now will not only demonstrate the value of security to the business, it will put your company in the driving seat when it comes to delivering these new contracts.

Finally, can a UK government IT policy catch on and be universally accepted? Well, ITIL isn’t doing too badly!

October 28, 2009

Developing Custom IAG Application Optimisers

Filed under: Techy — chaplic @ 8:46 am
 

Microsoft’s Intelligent Application Gateway (IAG) includes a number of built-in application optimisers to secure and protect the applications you want to publish. It also features a high-performance search-and-replace engine which can filter web pages in-line; coupled with URL rules, you can build your own complex application optimisations.

It’s easy to build custom filters with IAG, and straightforward to create your own rules to meet your own business needs. This article explains how by delving into an example: publishing Outlook Web Access (OWA) using labelling/metadata tags.

The Scenario

Like many organisations, Contoso.com makes use of metadata tags to assist with archiving, retention and security policies. These are implemented by the use of a label in the subject line of all emails.  Exchange transport rules add labels if users do not apply them.

Writing a labeled email

Due to the nature of Contoso.com’s business, they wish to prevent some emails being viewed from ‘untrusted’ workstations, for example, web cafes.

Contoso already uses IAG to provide access to OWA 2007. IAG already includes a number of powerful features that allows you to decide what a ‘trusted’ workstation is, for example, a registry key or anti-virus update level.

With Contoso, all IAG access will be from machines deemed untrusted, and therefore the following business rules need to apply:

  • Emails labelled [PERSONAL], [SOCIAL] or [LOWRISK] can be viewed
  • Emails labelled [FINANCIAL] or with no valid label cannot be viewed

The Process

This example takes you through the design, thought process and implementation of an application optimiser to satisfy the above business rules, and assumes a basic familiarity with IAG. The IAG example virtual machines available from the Microsoft website are suitable for following this example.

Step 1: Understand your application

In order to create the rule to enable this functionality, we need to understand how the application operates. The best way to do this is to browse a number of sessions using a tool like Wireshark, Netmon or Fiddler to understand the HTML and related syntax produced.

Let’s take this example of an OWA page:

A labelled email through OWA

A full capture of the underlying HTML is available here; the edited highlights are below:

<!– Copyright (c) 2006 Microsoft Corporation. All rights reserved. –>
<!– OwaPage = ASP.forms_premium_readmessage_aspx –>
<html dir="ltr">
<head>
<title>[FINANCIAL] Takeover plans</title>
</head>
<body class="rdFrmBody">
<div id=divThm style=display:none _def=8.0.685.24/themes/base/>
</div>
<textarea id="txtBdy" class="w100 txtBdy" style="display:none">
&lt;div dir=&quot;ltr&quot;&gt;&lt;font face=&quot;Tahoma&quot; color=&quot;#000000&quot; size=&quot;2&quot;&gt;Let’s go buy litware inc&lt;/font&gt;&lt;/div&gt; &lt;/body&gt; &lt;/html&gt;
</textarea>
</div>
</body>
</html>

The screenshot earlier shows the HTML produced when a page is requested. What we are looking for is a repeatable, common pattern so we can then write a regular expression to define what is acceptable. Examining the HTML syntax above (and testing against a number of scenarios), we can see a pattern forming that would meet our needs:

  • The subject text is included in the code, preceded by <TITLE>
  • There is a consistent marker to show the last line that we’d want to match on: </html>

We also need to decide what we’re replacing the redacted text with. Ideally this should be as similar to the original page as possible to avoid script errors. For simplicity, in this example, the replacement text is to be:

<body class="frmBody">
<div id=divThm style=display:none _def=8.1.240.5/themes/base/>
</div>
The policy of your organisation does not permit access to this email from this location
</body>
</html>

Step 2: Define your regular expression

To do this, you need to understand how to construct regular expressions (also called regex or regexps), and there are lots of guides available on the web.

To test your regexp, copy and paste your HTML grab into your favourite text editor that supports regular expressions to search for text.

In this case, we want to achieve the following:

Search for the body content of a webpage, and if it doesn’t have [PERSONAL], [SOCIAL] or [LOWRISK] at the start of the subject line, then redact the body text (replace the text with something else)

Referring back to Step 1, we note that we want to begin and end the search looking for:

<TITLE>something</html>

This would translate into a regexp as:

<TITLE>.*</html>

However, this isn’t good enough; it would match ALL pages and mean they would ALL be redacted.

What we have to do now is define the strings that poison the search expression – in other words, if these terms are in the search string then do not match the string.

This is a bit of a leap, so we need to go back to our business rules:

  • Emails labelled [PERSONAL], [SOCIAL] or [LOWRISK] can be viewed
  • Emails labelled [FINANCIAL] or with no valid label cannot be viewed

As it stands, it’s difficult to write a regexp to cover these two rules. However, they could be re-written to say:

  • Do not display any emails unless they are labelled [PERSONAL], [SOCIAL] or [LOWRISK]

This is semantically the same but much easier to write into a regexp as it is one rule. It also ‘fails safe’ because if a new label is used, the page will not be displayed by default.

We now need to write the ‘unless’ part of our regexp. For this, we’ll use the fantastically titled ‘negative forward lookahead’, combined with a wildcard.

The negative-forward lookahead is represented as a ?! and can be thought of as a ‘NOT’. It would look like this:

(?!\[PERSONAL\]|\[SOCIAL\]|\[LOWRISK\]).*

Which reads as:

Zero or more of any characters apart from [PERSONAL] or [SOCIAL] or [LOWRISK] at the start.

So, putting it all together we get:

<title>(?!\[PERSONAL\]|\[SOCIAL\]|\[LOWRISK\]).*</html>

Finally, although IAG lets us define which pages this search-and-replace will act upon, it’s worth customising the search syntax to target the exact pages you intend to filter and so minimise any mishaps.

If we examine the code, we can see the following:

<!– OwaPage = ASP.forms_premium_readmessage_aspx –>

So with a bit more experimentation, we can come up with:

.*readmessage.asp.*<title>(?!\[PERSONAL\]|\[SOCIAL\]|\[LOWRISK\]).*</html>
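You can sanity-check the finished expression outside IAG with a few lines of Python (note the escaped square brackets – unescaped, [PERSONAL] would be a character class matching a single letter – and the DOTALL flag, since the match spans multiple lines; the replacement text here is a simplified stand-in):

```python
import re

# The finished expression, brackets escaped so the labels match literally.
PATTERN = re.compile(
    r".*readmessage.asp.*<title>(?!\[PERSONAL\]|\[SOCIAL\]|\[LOWRISK\]).*</html>",
    re.IGNORECASE | re.DOTALL,  # DOTALL: '.' must cross line breaks
)

# Simplified stand-in for the article's replacement HTML.
BLOCKED = ("<body>The policy of your organisation does not permit access "
           "to this email from this location</body></html>")

def filter_page(html: str) -> str:
    """Redact the page unless the subject line starts with an approved label."""
    return PATTERN.sub(BLOCKED, html)
```

A page whose title starts with [PERSONAL], [SOCIAL] or [LOWRISK] passes through untouched; anything else, including an unlabelled subject, is replaced wholesale.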

Step 3: Integrate it with IAG

Now that we’ve built our regular expression, we need to build it into IAG.

On your IAG, load the Editor from the Whale Communications IAG\Additional Tools Menu, then the file

C:\Whale-Com\e-Gap\von\conf\SRATemplates\WhlFiltSecureRemote_HTTP.xml

(or WhlFiltSecureRemote_HTTPS.xml if that’s what your portal uses)

To get an example of what we are going to do, search for

<SEARCH encoding="base64">

The first thing you’ll notice is that the search string is garbled: it’s encoded in Base64, which makes life easier because you do not need to escape control characters.

To decipher the text, click your cursor at the start of the encoded text, hold shift then use the right arrow key to select text until the </SEARCH> tag, then click ‘From 64’, as shown on the screenshot below.

Don’t use the mouse to click and drag, because it often helpfully tries to select the end of the text – and fails!

IAG Editor

This then gives you an idea of the syntax required for our search-and-replace instruction:

<APPLICATION>
    <APPLICATION_TYPE>application name</APPLICATION_TYPE>
       <URL>
         <NAME>URL for the pages required</NAME> 
            <SEARCH mode="regexparam" encoding="base64">search regexp – base64 encoded
            </SEARCH>
            <REPLACE encoding="base64">Replacement text base 64 encoded </REPLACE>
      </URL>
</APPLICATION>

So, our example will be:

<APPLICATION>
    <APPLICATION_TYPE>owa2007</APPLICATION_TYPE>
       <URL>
         <NAME>.*</NAME> 
            <SEARCH mode="regexparam" encoding="base64">.*readmessage.asp.*<title>(?!\[PERSONAL\]|\[SOCIAL\]|\[LOWRISK\]).*</html> </SEARCH>
            <REPLACE encoding="base64"><body class="frmBody"> <div id=divThm style=display:none
_def=8.1.240.5/themes/base/></div> The policy of your organisation does not permit access to this email from this location </body></html> </REPLACE>
      </URL>
</APPLICATION>

Note:

  • The text above isn’t Base64 encoded yet to aid readability
  • Even though the text is going to be Base64 encoded, it is still necessary to escape the open and close square brackets
  • The <NAME>.*</NAME> defines what path/pagename the search string should match on; in this example, it will attempt to match against ALL OWA pages
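The Base64 text itself can be produced with a couple of lines in any language; for example, in Python (an illustrative helper, assuming UTF-8 text – check the result against the editor’s ‘From 64’ view):

```python
import base64

def to_b64(text: str) -> str:
    """Base64-encode a search or replace string (UTF-8 assumed)."""
    return base64.b64encode(text.encode("utf-8")).decode("ascii")

# The search expression from the example, brackets already escaped.
search = r".*readmessage.asp.*<title>(?!\[PERSONAL\]|\[SOCIAL\]|\[LOWRISK\]).*</html>"
encoded = to_b64(search)
```

Decoding the result gives back exactly the string you started with, which is a useful check before pasting it into the XML.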

Select a location for your search-and-replace instruction and nestle it between an existing
</APPLICATION> and <APPLICATION> tag.

IAG Editor

IAG is very, very unforgiving (and unhelpful) if you get any syntax wrong. One way to protect against this is to load the XML file in Internet Explorer and allow active content. This will highlight some syntactical errors.

IE Helping us to spot bugs

If we pass the IE test, then it’s time to activate the IAG configuration, ensuring the ‘Apply Changes made to external configuration settings’ option is ticked.

Activate the config

You may encounter one of the following errors:

  • IAG Admin Console Message Area reporting failure to save config
  • Invalid index when attempting to view the IAG portal

If so, it’s almost certainly to do with invalid syntax: are you sure you’ve Base64 encoded everything properly?

Now you have ironed out all the syntax errors, it’s time to test. Let’s look at a message that the business rules state we should not be able to see:

Our filtering rule being applied

Note the IE Script error message; this is due to our simplified replacement text. Now, let’s double-click on another prohibited email (the email with subject “Label-less”) and examine the results:

IAG Filtering rule in action

Finally, let’s confirm all is OK by attempting to read the email we should be able to access:

Access to an email

Success! Your information security policy can be complied with, and you can chalk up one more victory with the assistance of IAG.

Conclusion

Although the example above takes a number of design, development and test short-cuts, we can see it’s achievable to write your own application optimisers to meet your own business needs. Wordfish Ltd is an IT consultancy specialising in infrastructure design, novel solutions and web development. If you would like some help with your IAG application optimisers, I’d love to hear from you.

October 27, 2009

Providing very secure webmail

Filed under: Government IT Security, Uncategorized — Tags: , — chaplic @ 3:46 pm

 

Most office workers are familiar with the concept of “webmail”. It allows the employee to access their email from any web browser, on any internet connected PC. This gives staff flexibility, may remove the need to supply some staff with a laptop and allows access anytime and anywhere – for example, on holiday (if they are keen). Webmail looks similar to email in the office and allows the user access to their inbox, calendar and attachments.

Technical configuration is fairly straightforward – encryption is provided by the same type of system used to secure web banking, and users log in either with their office username and password or, occasionally, a more sophisticated mechanism like SecurID (a little fob with changing numbers). All major email packages include a webmail server and it’s straightforward to configure.

It is cheap to provide, easy to use, and popular with staff.

Some clients cannot accept the risks of providing a “vanilla” webmail solution. Why not?

The stereotypical answer of “security” is often used. But to understand why this answer is used, it’s necessary to look at aspects of a webmail system.

Firstly, the encryption. As the data travels across the public internet and untrusted systems, it’s necessary to encrypt it. This encryption is a flavour of “SSL” or Secure Sockets Layer – websites identified by a padlock and starting with https in a web browser. This is the exact same technology used by online buying and banking.

Whilst for most intents and purposes SSL is pretty secure, some organisations do not consider it secure enough: a suitably positioned man-in-the-middle can potentially read the encrypted data.

The other challenge is the “endpoint” – otherwise known as the PC or laptop. With an organisation’s own PC it’s possible to be reasonably confident that software patching is up-to-date, there’s no malware installed, and anti-virus is up-to-date. This cannot be claimed of computers that are likely to be used for webmail access.

Computers used for webmail are likely to be home PCs (perhaps crawling with nasties) and public web-cafes. Web-cafes in airports are a well-known target for people installing keylogging software, as they are commonly used by businessmen. Such nasties can capture information and send it back to the attacker. It’s unlikely to be a targeted attack – the malware will be on millions of PCs – but it is unknown what the attacker will do with the information.

The attacker is probably seeking ebay login details or credit card numbers. But, potentially, for that session and maybe beyond, they can access what the user can access via webmail.

Finally, there is also the issue of data remanence. When a web page is loaded, all the information is stored locally on the PC to speed up access. This is especially true if the user accesses an attachment. This information is typically not encrypted (and there is no way of controlling this). Thus, the next user of the machine may very easily find information they are not intended to see.

Predictably, the market has developed solutions. Webmail can be accessed via a number of products all of which can check the endpoint to ensure anti-virus is up-to-date, ensure it passes a number of other tests, and wipe attachments when the session ends. It’s also possible to control what operating system and web browser the user is connecting from, though the PC may spoof this.

It’s also possible to setup filtering based systems where the webmail system either filters emails the user can see based on a label (i.e. do not show this email unless it’s labelled as “UNCLASSIFIED”), or the webmail system is a duplicate of the normal environment, but only containing non-sensitive emails.

Ultimately, the decision to implement such a solution lies with the organisation’s risk owners. Clearly, they need to be in full possession of the facts, risks and countermeasures. They will also need to support the development process, because a novel solution like this is likely to attract attention.

The impact of not providing this facility needs to be assessed. How many staff effectively do this already by emailing documents to their Hotmail account so they can work at home? Recently, transport unions have proposed short-notice 5-day strikes: how would the organisation cope if a key transport route was closed? And what would be the impact on carbon-neutral and efficiency-savings targets (the need to buy thousands of people a BlackBerry or laptop)?

A likely technical solution would have the following aspects:

· A “front end” webserver in a secured (DMZ) network

· Use of best-commercial-grade SSL encryption

· Registration with companies on the internet that allow ongoing and continual penetration testing of websites

· Endpoint checking – a small software component would have to be downloaded to the untrusted endpoints. This checks the machine to ensure software patch levels and antivirus is at acceptable levels, and perhaps the operating system is agreeable. If the endpoint check fails, or cannot be loaded, access is denied

· A high degree of protective monitoring – user access would be closely logged and anomalies alerted (perhaps in real-time)

· There is an element of end-user responsibility, therefore terms and conditions would have to be maintained and agreed. It may be necessary for the solution to be “opt in”

· Use of a one-time-password (RSA SecurID) to replace or complement a password

· It may be desired to redact some information or remove some webmail functionality – for example the ability to download or upload attachments

· It may be desirable to control which machines can access the webmail by performing an enrolment procedure and using certificates. This removes the ability to access from any PC but allows access from pre-agreed PCs (e.g. a user’s home PC)
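The endpoint-checking aspect above can be sketched as a simple gateway decision: the downloaded component reports a posture snapshot, and the gateway denies access unless every test passes. The field names, OS list and thresholds here are illustrative assumptions, not a real product's checks.

```python
# Hypothetical endpoint posture check for the webmail gateway.
# A missing or incomplete report fails closed and access is denied.
AGREED_OS = {"Windows XP SP3", "Windows Vista SP2", "Windows 7"}

def access_decision(posture):
    """Return 'ALLOW' only if every posture test passes; otherwise 'DENY'."""
    checks = [
        posture.get("os") in AGREED_OS,                   # OS on the agreed list
        posture.get("av_signatures_age_days", 999) <= 7,  # antivirus up to date
        posture.get("patch_level_current", False),        # patches acceptable
    ]
    return "ALLOW" if all(checks) else "DENY"

healthy  = {"os": "Windows 7", "av_signatures_age_days": 2,  "patch_level_current": True}
stale_av = {"os": "Windows 7", "av_signatures_age_days": 30, "patch_level_current": True}
```

A real gateway would also have to defend the check itself (a hostile endpoint can lie), which is why this only reduces risk from careless machines rather than determined attackers.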

I’m a big fan of the Microsoft IAG product. At its most basic level it’s an SSL VPN, and brings with it the endpoint checking functionality – so we can ensure the client PC is at a certain patch level.

It also allows us to dip into the data being accessed – in real time – and perform filtering based on rules we set. Finally, it sits atop ISA Server 2006, a firewall that’s Common Criteria EAL4 evaluated – in other words, a robust firewall.

A simplified solution architecture is shown below:

clip_image004

In conclusion, the effort required to achieve this, and the friction of creating a solution that steps outside the normal security paradigm for a high-security organisation, should not be underestimated. But the technology to create a robust solution exists and is heavily used commercially.


September 24, 2009

Government Security is quite good – and out to get you.

Filed under: Government IT Security — chaplic @ 6:17 pm

 

The UK Government’s Information Assurance policies (IT Security to you and me) are actually quite good.

There, I said it.

And before someone mentions the thorny issue of CDs in the post, allow me to delve a bit deeper.

Each department is responsible for assessing its own risk and putting countermeasures and functionality in place as it sees fit. However, it’s driven by policy from the “centre”, meaning there is commonality across all central government departments.

For the most vital of documents, keeping them confidential, unmolested and available when they are needed is critical.

However, not all data falls into this category, and providing the ultimate level of protection to everything would be expensive and cumbersome. To help segregate data, the government uses protective markings.

This is a short term, like RESTRICTED or TOP SECRET, which is shorthand for what would happen should the information be compromised. Lower markings may mean mere commercial exposure or embarrassment; the highest cover compromise of other assets directly leading to loss of life. Labelling documents and systems makes the value of the data contained within very clear.

This probably isn’t directly applicable to most commercial companies. However, if many had a label of, say, “PERSONALLY IDENTIFIABLE INFORMATION” or “COMMERCIALLY SENSITIVE”, and clear guidelines as to how information like this should be handled (i.e. do not take a document labelled “PERSONALLY IDENTIFIABLE INFORMATION” on a laptop without hard-disk encryption), how many fewer cases of potential identity theft would we have?
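The point of pairing a label with handling rules is that a policy check becomes a lookup rather than a judgement call. A minimal sketch, using invented labels and rules rather than any organisation's real policy:

```python
# Illustrative mapping from protective labels to handling rules.
# Labels, rule names and values are made-up examples.
HANDLING_RULES = {
    "PERSONALLY IDENTIFIABLE INFORMATION": {"laptop_needs_disk_encryption": True},
    "COMMERCIALLY SENSITIVE":              {"laptop_needs_disk_encryption": True},
    "PUBLIC":                              {"laptop_needs_disk_encryption": False},
}

def may_copy_to_laptop(label, disk_encrypted):
    """Apply the handling rule for this label; unknown labels fail closed."""
    rules = HANDLING_RULES.get(label)
    if rules is None:
        return False
    return disk_encrypted or not rules["laptop_needs_disk_encryption"]
```

The same fail-closed choice applies as elsewhere: a document whose label nobody recognises should be treated as sensitive, not public.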

So, the UK Government has a nice labelling system which puts all data in little pots, a bunch of policy documents telling users what they cannot do, and a whole host of technical security requirements. Fascinating, but not a compelling reason for your business to get on board with a structured security methodology?

e-Government is an agenda that’s still gathering pace. You will almost certainly have some customers who are, or are related to, a government organisation.

National Government recognises the value of secure communications and is pushing its intranet (the GSi – Government Secure Intranet – and variants) out to partner organisations, quangos and local councils. To connect up, these bodies have to warrant that their systems stand up to Codes of Connection.

If you want to do business with any of these bodies you are going to have to get to grips with these requirements too. Fortunately, the requirements are not arcane, unusual or hidden. They are published on the cabinet office website and called the Security Policy Framework http://www.cabinetoffice.gov.uk/spf.aspx

Let’s quote one requirement that’s poignant here:

Departments and Agencies must have, as a component of their overarching security policy, an information security policy setting out how they, and their delivery partners (including offshore and nearshore (EU/EEA based) Managed Service Providers), comply with the minimum requirements set out in this policy and the wider framework

There’s no escaping it. Expect to see adherence to SPF in your ITT and contractual requirements (if they are not already).

Many companies, if not well versed in Government IT security, find the process alarming when the full implications are realised. They may well have used enough smoke and mirrors during the bid phase to hide their lack of knowledge, or a poor score in this area may not have been enough to lose them the bid.

But when they come to deliver, under the full scrutiny of experienced consultants, accreditors and security officers, they often find their SPF-related contractual obligations daunting (and expensive) to meet.

But all is not lost. This is a scenario where security can truly be a business-enabler for your company.

Firstly, it provides you with a carefully thought-out, well-proven and common set of criteria for your IT security operation. Sometimes even organisations with pretty tight IT security setups, like banks, find they do not meet the criteria. It isn’t necessarily a quick fix, but it is a path for your organisation (or perhaps only a subsection of it).

To understand how mature your Information Assurance is and how work is progressing, an Information Assurance Maturity Model is available – those who work with CMMi will be in their element.

Secondly, and most importantly – your company will likely want to do business with the government at some point, on some level. Taking these steps now will not only demonstrate the value of security to the business, it will put your company in the driving seat when it comes to delivering these new contracts.

Finally, can a UK government IT policy catch on and be universally accepted? Well, ITIL isn’t doing too badly!

