Colin’s IT, Security and Working Life blog

April 12, 2011

Load Balancing VPN connections over multiple ADSL lines.

Filed under: Uncategorized — chaplic @ 12:03 pm

Here’s the scenario: you have a site that has local servers, and for reasons outside your control you cannot get a decent MPLS link (or similar) in quickly.

However, you can get a number of ADSL connections in quickly, and users can use their corporate VPN client to reach head office.

But how do you balance users across the ADSL lines? You could subnet the network and make each ADSL router the default gateway for one subnet, but that’s a lot of network change. Or you could use the little technique described below.

The VBScript reads an XML file, throws a dice, and sets up some static routes at random based on the file’s contents. The static routes point at the IP addresses of your VPN endpoints.

The program then drops to a shell to run a ROUTE ADD command – note that it doesn’t set the route permanently, so the script should be run via a login script or similar. Users will need to be members of the “Network Configuration Operators” group.

 

The syntax of the XML is as shown:

<routerandom>
<rtr>
<gateway>IP.OF.FIRST.ADSL</gateway>
<route>ROUTE.OF.VPN.ENDPOINT1 MASK 255.255.255.255</route>
<route>ROUTE.OF.VPN.ENDPOINT2 MASK 255.255.255.255</route>
</rtr>
<rtr>
<gateway>IP.OF.2ND.ADSL</gateway>
<route>ROUTE.OF.VPN.ENDPOINT1 MASK 255.255.255.255</route>
<route>ROUTE.OF.VPN.ENDPOINT2 MASK 255.255.255.255</route>
</rtr>
</routerandom>
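
The actual RouteRandom.vbs is linked at the end of this post; purely for illustration, here’s a rough sketch of the approach (not the real script) – it assumes the XML is saved as routes.xml alongside the script and uses the element names shown above:

Option Explicit

Dim xmlDoc, routers, chosen, routeNode, gateway, shell

' Load the XML file of ADSL gateways and VPN endpoint routes
Set xmlDoc = CreateObject("Microsoft.XMLDOM")
xmlDoc.Async = False
If Not xmlDoc.Load("routes.xml") Then
  WScript.Echo "Could not load routes.xml"
  WScript.Quit 1
End If

' The "dice throw" - pick one <rtr> block at random
Set routers = xmlDoc.SelectNodes("/routerandom/rtr")
Randomize
Set chosen = routers.Item(Int(Rnd * routers.Length))
gateway = chosen.SelectSingleNode("gateway").Text

' Add a host route to each VPN endpoint via the chosen ADSL gateway
Set shell = CreateObject("WScript.Shell")
For Each routeNode In chosen.SelectNodes("route")
  shell.Run "route add " & routeNode.Text & " " & gateway, 0, True
Next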

The tool is quite flexible and reliable. Unfortunately, it’s not as fault-tolerant as I would like, because (certainly with the Cisco VPN client) the software doesn’t fail over to the next-lowest-cost route if an ADSL router fails. So, if an ADSL router dies, the only option is to remove it from the XML file.

 

The code is here – forgive me for it being inside a Word doc: RouteRandom.vbs

November 11, 2010

Using a Mac as a Windows Guy

Filed under: Uncategorized — chaplic @ 11:57 am

A few months ago, I needed a new laptop. I wanted something that was fast and sturdy, looked good, and had a good screen and keyboard. I also wanted 8GB of RAM to do virtualisation, which was unusual at the time.

I ended up choosing a Mac; when I specced it as I wanted (an i7 processor, the high-res anti-glare screen and a faster HD) it wasn’t that much more expensive than an equivalent Dell. Of course, I need to run Windows on it; however, I have dabbled with the Mac side of things and have spent more time in OS X than Win7 at present. This is just a random collection of thoughts based on a few months’ usage.

 

The hardware itself is excellent – I don’t know if it’s actually more sturdy than any other laptop, but it feels better put together. I do miss the ability to have an internal 3G card, and the USB ports being too close together is annoying, but overall it’s easily the best hardware I’ve used. It’s quite small, which is handy as I usually carry two laptops.

OS X itself I’m not especially blown away with; it’s been a few years since I used a Mac in anger. I picked it up quite quickly, and the multi-touch trackpad is excellent, but the rest of it doesn’t strike me as especially user-friendly or special. I still struggle with the “menu at the top, application in a window” model of operation, and dislike the window maximising (or lack thereof!).

Being able to drop into Unix and find standard Unix commands lurking under the surface has been quite handy. But even accounting for the fact I’m more familiar with Windows, I think Windows 7 has it in the usability stakes.

As far as reliability goes, I would say they are both equal. I’ve had apps crash on both, and two kernel panics during my use.

I’m finding OS X a good base to run VirtualBox; I have no evidence, but VMs do feel quicker running in OS X than in Windows.

In terms of apps, I’ve found everything I need apart from Office and Visio. I haven’t invested in MS Office for Mac yet, and I think OpenOffice and NeoOffice are lousy at handling Word documents (I’ve timed over a minute to open a Word document – unacceptable). I’ll look carefully at Office for Mac before making my choice, but for now I continue to drop into Windows for Office and Visio 2010.

I’m dealing with the odd keyboard layout easily; my brain seems quite comfortable leaping between the Mac keyboard and a normal one.

Overall, what has surprised me is how small the difference is between the two. Any dev work I do is in VMs, and other than the Office issues discussed above, I struggle to find a compelling reason to use one over the other. I’m sticking with OS X just now, more for the novelty factor, but I think long term I’ll find myself in Win7 more.

 

June 28, 2010

Getting IT experience – self-taught exercises

Filed under: Uncategorized — chaplic @ 11:47 am

I often get asked “how do I get into IT?” or “what’s the best route?”. Here’s some advice along these lines, but different from the usual guidance on certs and training.

Below are a series of suggested tasks to get you up and running in the IT infrastructure world. Intentionally, I’ve not explained every step in great detail, nor included everything you have to do. Nor will performing these tasks make you an expert in these technologies. In fact, one of the goals of the exercise is to get comfortable and familiar with new technology, googling for information and doing some “try and see” in a safe environment.

1. Build yourself a PC
In years gone by, building a PC from components was a good way to get a cheap PC. These days, less so. However, we have particular needs for this PC, and the actual building and fault-finding process will help us along the path. The exact spec is up to you, but we need:
• As much RAM as possible (suggest 8GB)
• Processor capable of 64 bit OS and virtualisation
• DVD Drive
Otherwise, it needn’t be the highest spec. You should check that all drivers are available in 64-bit versions, though happily it’s very unusual these days for that not to be the case.

2. Microsoft TechNet Direct
A Microsoft TechNet Direct subscription is something every Windows techie should have. For just over a hundred pounds a year, it gives you full access to all Microsoft business software, and it is great for testing and evaluating – just as we’re doing here. So get yourself a subscription and make the first thing you download Windows 2008 R2, as we’re going to build a…

3. Virtual environment
Now we’ve got a shiny new PC, let’s start to do something with it. Burn your Windows 2008 R2 ISO to a DVD and pop it in your machine. Build the OS as you see fit, and install the Hyper-V role. We’re going to use that as our virtualisation software. Other than the basic software you need to manage the server (I’d also suggest 7-Zip as a good tool), you shouldn’t modify the base server. That’s what VMs are for!
First things first, let’s build a basic server image VM. Fire up the Hyper-V console and configure a VM with settings you think make sense. Copy the Windows 2008 R2 ISO file to the machine and mount it. Turn on your virtual machine and install Windows 2008 R2. When it has finished building, ensure you install the Hyper-V integration services.
Close the virtual machine down, and take a copy of the VHD. We’ll use that as a “gold” image to build other hyper-V machines.

4. Build an Active Directory
Our first server is going to be an Active Directory server – this is used by almost all other Windows server components, so it makes sense to build it first. Copy the gold VM VHD and configure a new VM – I’d give it, say, 4GB of RAM while we’re working on it, then reduce the RAM once it’s just ticking over in a steady state.
Use the NEWSID tool to ensure the machine has a unique identifier.
Installing Active Directory will also install DNS – set up forwarding so that queries go to your ISP’s DNS servers.
Decide on the structure of your OUs – where you will put users, computers and servers.
Create some users, plus groups called the following (a quick ADSI scripting sketch follows the list):
• Admins
• Managers
• Staff
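
Purely as an illustration of scripting this, here’s a minimal ADSI sketch – the domain lab.local and the OU names are placeholders, so adjust the LDAP paths to match your own design:

Option Explicit

' Create the three groups in a (pre-existing) Groups OU
Const ADS_GROUP_TYPE_GLOBAL_SECURITY = -2147483646
Dim groupsOU, grp, grpName

Set groupsOU = GetObject("LDAP://OU=Groups,DC=lab,DC=local")
For Each grpName In Array("Admins", "Managers", "Staff")
  Set grp = groupsOU.Create("group", "CN=" & grpName)
  grp.Put "sAMAccountName", grpName
  grp.Put "groupType", ADS_GROUP_TYPE_GLOBAL_SECURITY
  grp.SetInfo
Next

' ...and a test user in a (pre-existing) Users OU
Dim usersOU, usr
Set usersOU = GetObject("LDAP://OU=Users,DC=lab,DC=local")
Set usr = usersOU.Create("user", "CN=Test User")
usr.Put "sAMAccountName", "testuser"
usr.SetInfo
usr.SetPassword "Choose-a-password-here"   ' initial password
usr.AccountDisabled = False
usr.SetInfo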

5. Install WSUS

This might be a learning environment, but we want to follow best practice! So download WSUS from Microsoft. I’ll leave it to you to decide whether to install it on the Active Directory server or build a new server to host it.
The next thing to do is to build a GPO to ensure all machines refer to the local WSUS server for updates. Decide on your update strategy, both in terms of WSUS approvals and the application of patches. I’d be inclined to automate approvals and install as much as possible, as this is only a trial environment.
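
The GPO is the right way to deliver the settings; purely to illustrate what that policy actually sets on each client, here’s a sketch of the registry values involved – http://wsus-server:8530 is a placeholder URL for your own WSUS server:

Option Explicit

' These are the values the WSUS group policy writes on each client
Dim shell
Set shell = CreateObject("WScript.Shell")

Const WU = "HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\"

shell.RegWrite WU & "WUServer", "http://wsus-server:8530", "REG_SZ"
shell.RegWrite WU & "WUStatusServer", "http://wsus-server:8530", "REG_SZ"
shell.RegWrite WU & "AU\UseWUServer", 1, "REG_DWORD"
shell.RegWrite WU & "AU\AUOptions", 4, "REG_DWORD"   ' 4 = auto download and schedule the install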

6. PC Image
If possible, this should be on a “real PC”. If we don’t have the kit, then a virtual machine will have to do. I’ll leave the operating system choice up to you, but XP is still a valid choice as it’s still used everywhere – although it may add complexity with your automated deployment tool.
What we’re doing here is building a PC in the anticipation that it’s going to be rolled out to thousands of desktops. So we want the initial install scripted (i.e. automated input of user details, serial number and so on).
Include any drivers that your target machines are likely to need, plus service packs and patches. Don’t install any software (that will follow).
Then, follow the instructions for preparing the machine for imaging, which will include resetting the SID, wiping the swap files and so on.
You need to decide on a deployment method: RIS or WDS. WDS is the newer technology, but there might be a reason to choose RIS, especially if you have XP as your OS.
Once you have that up-and-running, image a few PCs (virtual or real) and see how you get on.

7. Install Software
Most big companies will have a heavyweight deployment tool to package and deploy software; here we’re going to keep it simple and use the built-in Windows software deployment.
Download some Microsoft software (I suggest Office and Visio) and configure these packages so AD will deploy them to all PCs (not servers, though!).

8. File and Print Server
We want to set up a file share with particular access rights.
The permissions should be:
• Admins – Full Control
• Managers – Change
• Staff – Read only
Also, all users should have this share mapped as their “X: drive” automatically upon login – a minimal login-script sketch is shown below.
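
Purely for illustration, here’s a minimal login-script sketch for the drive mapping – \\FILESERVER\shared is a placeholder for your own server and share name:

Option Explicit

' Map the X: drive at logon; remove any stale mapping first
Dim network
Set network = CreateObject("WScript.Network")

On Error Resume Next
network.RemoveNetworkDrive "X:", True, True
On Error GoTo 0

network.MapNetworkDrive "X:", "\\FILESERVER\shared", True
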
It’s your choice whether to setup another dedicated file server VM or “piggy back” upon another one.
Your next task is to set up a network printer. This should be configured so that users can connect to \\servername\printername and have the drivers for that printer installed automatically. Note that if you have a USB printer, it may well be easier to share it from the “real” server.

9. Exchange
This is a big one! I would actually suggest installing Exchange 2003, as many companies still use it, and migrating away from it is a useful exercise in itself. However, your gold VM image will not be sufficient, as Exchange 2003 needs a 32-bit OS.
Build a new VM, install Exchange 2003 and create Exchange mailboxes for your users.
Now here comes the clever bit. We’re going to set up email routing to and from the internet. Go to a provider of dynamic DNS services like dyndns.com and set up a DNS name for your organisation that’s registered against your current connection’s IP address. Now also set up an MX record pointing to the same address. You then need to configure your ADSL router/cable modem/etc. to forward port 25 traffic from the internet to the IP address of your Exchange server.
Automatically create email addresses for your users in the format name@your-dynamic-dns-entry.
Finally, you should configure Outlook so that it automatically creates a profile for end users to connect to their new mailbox.

10. Document
Now that we’ve got a cracking IT infrastructure, let’s have a go at documenting it (OK, we should probably have done that first, but hey, this is only an exercise). Fire up Visio (downloaded from your TechNet subscription) and describe your environment. Your diagram should include:
• All your servers, names, IP address, function
• Active Directory
• Exchange
• Internet connection
• How mail is routed in and out
• Virtual versus real machines

June 3, 2010

Inventory Audit of a complex IT network

Filed under: Uncategorized — chaplic @ 5:49 pm

I’ve been spending some time doing a complete inventory of a rather complex IT environment, with more firewalls, servers, weird networking and all-round oddness than you can imagine. The network has around 10,000 IP devices and has grown fairly organically, with various organisations having been responsible for its upkeep – to varying levels of quality.

The need was for a “point in time” inventory of what’s out there. We didn’t have the use of a big tool like Centinel, nor did we wish to use the existing network management tools (so as to provide an independent result). Oh, and I had limited admin access rights.

Here’s how I did it

Largely, the work was split into two pieces: a Wintel audit and a “non-Wintel” one – encompassing networks, printers, switches, SANs…

The Wintel audit was fairly easy – I cobbled together a VBScript to query machines using WMI and pull back installed software, machine spec and so on – just the basic info you might want if you need to take migration decisions. I’ll post the script up in my next blog entry.
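
In the meantime, here’s a rough sketch of the sort of WMI queries involved (this isn’t the actual script) – REMOTE-PC is a placeholder, you need admin rights on the target, and Win32_Product only lists MSI-installed software:

Option Explicit

Dim machine, wmi, items, item

machine = "REMOTE-PC"    ' placeholder machine name
Set wmi = GetObject("winmgmts:\\" & machine & "\root\cimv2")

' Basic hardware spec
Set items = wmi.ExecQuery("SELECT * FROM Win32_ComputerSystem")
For Each item In items
  WScript.Echo item.Name & "," & item.Manufacturer & "," & item.Model & "," & item.TotalPhysicalMemory
Next

' Operating system and service pack
Set items = wmi.ExecQuery("SELECT * FROM Win32_OperatingSystem")
For Each item In items
  WScript.Echo item.Caption & " " & item.CSDVersion
Next

' Installed software (slow, and MSI packages only)
Set items = wmi.ExecQuery("SELECT Name, Version FROM Win32_Product")
For Each item In items
  WScript.Echo item.Name & "," & item.Version
Next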

The non-Wintel audit was more involved. Firstly, I used nmap to scan every IP device. It takes an “educated guess” as to what each device is, and does a reasonable job. The most surprising finding was that there was quite a lot of Wintel kit in here that I hadn’t picked up, because the machines were in different domains and workgroups. These were then added to the Wintel audit.

This gave me an outline of what to look for and how to investigate.

There were hundreds of printers on the estate, almost all HP. The nmap tool had done a reasonable job of guessing the type, but it wasn’t precise. To pin them down, I fired up HP JetDirect tools, a lightweight little utility that HP no longer provides in this basic form. Shame, because it’s all that’s needed. I just gave it the IP addresses relating to HP printers and it went off and queried them. Minutes later, I had NetBIOS names and proper printer models. Lovely.

I didn’t have full access to the networking devices, but I did have the SNMP community strings, so I used Billy the Kid’s Cisco SNMP tool.

I simply fired in a switch’s IP address and the community string, and the tool pulled back the switch’s CDP neighbours, helpfully giving me the model names and IP addresses of the Layer 2-connected switches. From this I was not only able to build a network map, but also to make the inventory far more accurate.

However, there was an area that was, by design, hidden from view. The client has multiple connections to multiple systems, and so has a myriad of firewalls and DMZs. I peered through the firewall rulesets to see if I could find equipment on the network that was hidden from ICMP scans. Easy on the Cisco ASDM, Checkpoint FW1 and the Juniper – slightly more complex reading the config of an old PIX! Doing this enabled me to find servers, switches and more firewalls behind firewalls.

Then it was just a case of manually picking off the oddities. The nmap scan found lots of Sun boxes; helpfully for me, they all revealed their machine names when I FTP’d to them or ran finger @machinename. Almost all the other devices – APC power strips, specialised printers – told me enough to be useful when connecting via Telnet, SSH, HTTP or HTTPS. I even found a plasma screen that talks IP!

The result? An asset list that’s about double the previous one… and a lot of “must improve housekeeping” to do!

April 26, 2010

Making Live Messenger work with 3’s 3G service

Filed under: Uncategorized — chaplic @ 8:16 pm

For some reason, I have never got Live Messenger to work on my laptop (Dell XPS M1530, Windows 7 x64) using a “3” 3G connection via the built-in 3G card and the Dell Mobile Broadband Card utility. It wasn’t really a concern – until now!

It would try to log in, and keep trying for a while…

[screenshot]

Before dying with Error Code: 8000402a:

[screenshot]

 

Clicking “Get more information” was the final insult, as all the results were in French. Sacré bleu!

[screenshot]

3 supplies a private IP address, and there have been a number of occasions where the web has been dead but TCP/IP connectivity fine, so I assume they use a “transparent proxy” – which I suspected of being at fault. Not much I could do about that, and I didn’t fancy calling the call centre while I was on the train.

Googling was difficult – when you are a company selling 3G services, calling yourself “3” isn’t helpful!

The diagnostics built into Messenger weren’t very helpful; apparently I have an invalid IP address. It works OK for the web, though.

[screenshot]

And, in fairness, the IP address arrangements handed down by the 3G connection do seem a bit odd…

[screenshot]

 

But this led to a dead end… so what now?

 

  • Firewall… Tried that, no difference
  • Windows Event log – nothing
  • Diagnostics in Live Messenger – too cryptic for me to decipher

My next step was to fire up Wireshark and try to understand what was going on. But sometimes inspiration comes from funny places.

I decided to see what would happen if I ran Live Messenger in admin mode. No difference.

Then I tried compatibility mode, changing it to Windows XP SP3:

[screenshot]

A UAC prompt pops up; I select “Yes”.

 

 

And….. drumroll….. SUCCESS!!

[screenshot]

 

All I need to do now is configure UAC not to complain when Messenger launches!

February 18, 2010

VB Script to delete files older than a few days

Filed under: Uncategorized — chaplic @ 5:35 pm

I had a client that, for a variety of reasons, moved the Outlook temporary files to a different folder from the default. It was noticed that Outlook wouldn’t always delete its temporary files, so someone had hacked together a little VBScript to do the job.

However, it appeared to be buggy and didn’t delete all files. No obvious reason why. Also, if the temp area wasn’t available when it was run, it would error to the screen.

To help, I concocted this script. I’m no VBScript guru, but by looking at examples on the web I pulled this together. It majors on letting the user know what it’s doing and trapping errors. The program style isn’t perfect, but it works.

Most importantly of all, we found the bugs in the previous effort: some Outlook attachments were marked as read-only and therefore not deleted. This script gets round that by forcing deletion – stopped only by permissions problems.

Here it is in all its glory..

Option Explicit

' Path to delete
const FolderName = "d:\Temp"
' Age of file (last modified) in days before it is deleted
const Days = 7

' Set debugmode to 1 for screen pops with helpful messages
const debugmode = 0

const SUCCESS = 0
const ERROR = 1
const WARNING = 2
const INFORMATION = 4

dim result
logit "Beginning process to delete temp files in " & FolderName & " older than " & Days & " days", INFORMATION
result = delFiles(FolderName, Days)

Function delFiles(strDir, strDays)
' Take in the path to delete and the number of days old a file has to be,
' then delete the files in that path if they are older than that.

  dim fso, folder, foldercontains, individualFile
  dim strComments
  dim intDeleted, intNotDeleted
  dim intDifferenceinDays

  intDeleted = 0
  intNotDeleted = 0
  Set fso = CreateObject("Scripting.FileSystemObject")

  on error resume next
  Set folder = fso.GetFolder(strDir)
  if err = 0 then
    strComments = strComments & strDir & " exists." & VBCRLF
  else
    ' Folder missing or inaccessible - log it and stop rather than erroring to screen
    strComments = strComments & "**ERROR** Accessing folder - cannot find " & strDir & VBCRLF & err.description & " (" & err.source & ")"
    logit strComments, ERROR
    err.Clear
    delFiles = 1
    Exit Function
  end if
  err.Clear

  Set foldercontains = folder.Files
  strComments = strComments & "Deleting files older than " & strDays & " days" & VBCRLF

  For Each individualFile in foldercontains
    ' Loop through each file in the folder and check its date
    if debugmode = 1 then
      wscript.echo "Looking at " & individualFile & VBCRLF & _
                   "Which has date last modified of: " & individualFile.datelastmodified & VBCRLF & _
                   "To see if it's " & strDays & " days older than " & Now
    end if

    strComments = strComments & VBCRLF & "Analysed " & individualFile & ": "
    intDifferenceinDays = DateDiff("d", individualFile.datelastmodified, Now)

    If intDifferenceinDays > strDays Then
      if debugmode = 1 then
        wscript.echo "We've decided to delete " & individualFile & "... DateDiff is " & intDifferenceinDays
      end if
      strComments = strComments & "Deleting... "
      on error resume next
      fso.DeleteFile individualFile, TRUE   ' force deletion, so read-only files go too
      if err = 0 then
        intDeleted = intDeleted + 1
        strComments = strComments & "SUCCESS"
      else
        intNotDeleted = intNotDeleted + 1
        strComments = strComments & "**ERROR** " & err.description & " (" & err.source & ")"
      end if
      err.Clear
    Else
      if debugmode = 1 then
        wscript.echo "We've decided to spare " & individualFile & "... DiD: " & intDifferenceinDays & "  Required: " & strDays
      end if
      strComments = strComments & "Not deleted. Only " & intDifferenceinDays & " days old"
    End If
  Next

  strComments = strComments & VBCRLF & "No of files deleted: " & intDeleted & VBCRLF & "Errors in deleting: " & intNotDeleted

  if intNotDeleted > 0 then
    logit strComments, ERROR
  else
    logit strComments, INFORMATION
  end if

  delFiles = 1
End Function

Function logit(text, level)
' Writes a simple message to the Windows event log
  dim WshShell
  set WshShell = CreateObject("WScript.Shell")
  WshShell.LogEvent level, text
  logit = 1
end Function

November 19, 2009

Exchange 2010 install – first thoughts

Filed under: Uncategorized — chaplic @ 4:49 pm

 

Just upgraded my company’s mail server to Exchange 2010. It’s not a large affair – about 10 mailboxes and a 7GB store. The user estate is more forgiving than most, and half of them were on holiday, so I had a bit of leeway.

It was previously running Exchange 2007 in a Hyper-V VM on a Dell quad-core server with 10GB of RAM and a handful of other VMs.

The first task was to set up a 2008 R2 VM and install Ex2010, both of which completed without incident. It’s nice not to need to install a gazillion prerequisites, as you do with Exchange 07 on vanilla 2008.

At this point I noticed performance issues. My Exchange 07 box had 4GB of RAM assigned to it, the Ex10 box 2GB (all I had left). Interactive performance was dire, as was web access.

Changing both boxes to 3GB helped slightly – well, it made performance on both boxes merely poor. Moral of the story: Exchange 200x, even for the most basic of deployments, needs 4GB of RAM to be acceptable.

Once most things were settled, I decided to go hell-for-leather and retire my Ex07 box. This was possibly the trickiest element of all!

The mailbox moves completed without incident, the 3GB mailbox taking just over two hours.

The Exchange uninstall would crash upon starting; it seems stopping all the services first is necessary.

To uninstall the server I needed to remove the public folder store. Try as I might, I couldn’t – after deleting all the PFs I could, taking the replicas off and trying various PowerShell scripts, still no joy.

So, the hacker’s tool of last resort? ADSIEdit.

[screenshot: ADSIEdit]

 

I opened the path shown and deleted the reference to the Public Folder. Success! I shut down Ex07 and gave Ex10 the memory it needed. Much better performance!

After some sanity checking and building of internet connectors, I changed the NAT and firewall rules to swing new email to the Ex10 server.

EEeeek!

All email was bounced with an error message of:

“Client host x.y.z.a UnknownDNSName”

I think this was caused by the fact I used OpenDNS.

Turning off “Enable Forefront DNSBL checking” cured this, and (so far) there has been no noticeable increase in spam.

[screenshot]

 

ActiveSync and Outlook Anywhere took a bit of work to bring to life. The excellent https://www.testexchangeconnectivity.com/ helped me out with the Outlook Anywhere config errors. It didn’t help with ActiveSync, but sometimes, just sometimes, the event log tells you exactly what’s wrong:

[screenshot]

After adding the permissions, my iPhone buzzed into life automatically.

Overall, this is clearly NOT a migration approach suitable for a large-scale Exchange implementation with high-availability requirements, but it was fairly smooth, and to my mind more reminiscent of an Exchange 2000 to 2003 upgrade than the “step changes” we saw from 03 to 07 or 5.5 to 2000.


October 29, 2009

Government Security is quite good – and out to get you.

Filed under: Uncategorized — chaplic @ 11:22 am

 

Note: repost

The UK Government’s Information Assurance policies (IT security to you and me) are actually quite good.

There, I said it.

And before someone mentions the thorny issue of CDs in the post, allow me to delve a bit deeper.

Each department is responsible for assessing its own risk and putting countermeasures and functionality in place as it sees fit. However, it’s driven by policy from the “centre”, meaning there is commonality across all central government departments.

For the most vital of documents, keeping them confidential, unmolested and available when they are needed is critical.

However, not all data falls into this category, and providing the ultimate level of protection to all data would be expensive and cumbersome. To help with the segregation of data, the government uses protective markings.

A protective marking is a short term like RESTRICTED or TOP SECRET, shorthand for what would happen should the information be compromised. Lower markings may just mean some commercial exposure or embarrassment, while the highest cover the compromise of other assets directly leading to loss of life. Labelling documents and systems makes the value of the data contained within very clear.

This probably isn’t directly applicable to most commercial companies. However, if many had a label of, say, “PERSONALLY IDENTIFIABLE INFORMATION” or “COMMERCIALLY SENSITIVE” and clear guidelines on how information like this should be handled (e.g. do not take a document labelled “PERSONALLY IDENTIFIABLE INFORMATION” out on a laptop without hard disk encryption), how many fewer cases of potential identity theft would we have?

So, the UK Government has a nice labelling system which puts all data in little pots, a bunch of policy documents telling users what they cannot do, and a whole host of technical security requirements. Fascinating, but not a compelling reason for your business to get on board with a structured security methodology?

e-Government is an agenda that’s still quickening pace. You will almost certainly have some customers who are, or are related to, a government organisation.

National government recognises the value of secure communications and is pushing its intranet (the GSi – Government Secure Intranet – and variants) out to partner organisations, quangos and local councils. To connect up, these bodies have to warrant that their systems stand up to Codes of Connection.

If you want to do business with any of these bodies, you are going to have to get to grips with these requirements too. Fortunately, the requirements are not arcane, unusual or hidden. They are published on the Cabinet Office website as the Security Policy Framework: http://www.cabinetoffice.gov.uk/spf.aspx

Let’s quote one requirement that’s pertinent here:

Departments and Agencies must have, as a component of their overarching security policy, an information security policy setting out how they, and their delivery partners (including offshore and nearshore (EU/EEA based) Managed Service Providers), comply with the minimum requirements set out in this policy and the wider framework

There’s no escaping it. Expect to see adherence to the SPF in your ITT and contractual requirements (if it isn’t there already).

Many companies, if not well versed in government IT security, find the process alarming when the full implications are realised. They may well have used enough smoke and mirrors during the bid phase to hide their lack of knowledge, or indeed a poor score in this area may not have been enough to lose them the bid.

But when they come to deliver, under the full scrutiny of experienced consultants, accreditors and security officers, they often find delivering their SPF-related contractual obligations daunting (and expensive).

But all is not lost. This is a scenario where security can truly be a business-enabler for your company.

Firstly, it provides you with a carefully thought out, well-proven and common set of criteria for your IT security operation. Sometimes even organisations with pretty tight IT security setups, like banks, find they do not meet the criteria. It isn’t necessarily a quick fix, but it is a path for your organisation (or perhaps only a subsection of it).

To understand how mature your Information Assurance is and how work is progressing, an Information Assurance Maturity Model is available – those who work with CMMi will be in their element.

Secondly, and most importantly – your company will likely want to do business with the government at some point, on some level. Taking these steps now will not only demonstrate the value of security to the business, it will put your company in the driving seat when it comes to delivering these new contracts.

Finally, can a UK government IT policy catch on and be universally accepted? Well, ITIL isn’t doing too badly!

October 27, 2009

Providing very secure webmail

Filed under: Government IT Security, Uncategorized — Tags: , — chaplic @ 3:46 pm

 

Most office workers are familiar with the concept of “webmail”. It allows the employee to access their email from any web browser, on any internet connected PC. This gives staff flexibility, may remove the need to supply some staff with a laptop and allows access anytime and anywhere – for example, on holiday (if they are keen). Webmail looks similar to email in the office and allows the user access to their inbox, calendar and attachments.

Technical configuration is fairly straightforward – encryption is provided by the same type of system used to secure web-banking, and users get logged in either by using their office username and password, or occasionally a more sophisticated mechanism like SecurID (little fob with changing numbers). All major email packages include a webmail server and it’s straightforward to configure.

It is cheap to provide, easy to use, and popular with staff.

Some clients cannot accept the risks of providing a “vanilla” webmail solution. Why not?

The stereotypical answer of “security” is often used. But to understand why this answer is used, it’s necessary to look at aspects of a webmail system.

Firstly, the encryption. As the data travels across the public internet and untrusted systems, it’s necessary to encrypt it. This encryption is a flavour of “SSL” or Secure Sockets Layer – websites identified by a padlock and starting with https in a web browser. This is the exact same technology used by online buying and banking.

Whilst for most intents and purposes SSL is pretty secure, some organisations do not consider it secure enough, and if you can get yourself into a man-in-the-middle position you can potentially read the encrypted data quite easily.

The other challenge is the “endpoint” – otherwise known as the PC or laptop. With an organisation’s own PC it’s possible to be reasonably confident that software patching is up to date, there’s no malware installed, and anti-virus is current. The same cannot be claimed of the computers likely to be used for webmail access.

Computers used for webmail are likely to be home PCs (perhaps crawling with nasties) and public web cafés. Web cafés in airports are a well-known target for people installing keylogging software, as they are commonly used by business travellers. Such nasties can capture information and send it back to the attacker. It’s unlikely to be a targeted attack – the malware will be on millions of PCs – but it is unknown what the attacker will do with the information.

The attacker is probably seeking eBay login details or credit card numbers. But, potentially, for that session and maybe beyond, they can access whatever the user can access via webmail.

Finally, there is also the issue of data remanence. When a web page is loaded, the information is stored locally on the PC to speed up access, especially if the user opens an attachment. This information is typically not encrypted (and there is no way of controlling it), so the next user of the machine may very easily find information they were never intended to see.

Predictably, the market has developed solutions. Webmail can be accessed via a number of products all of which can check the endpoint to ensure anti-virus is up-to-date, ensure it passes a number of other tests, and wipe attachments when the session ends. It’s also possible to control what operating system and web browser the user is connecting from, though the PC may spoof this.

It’s also possible to set up filtering-based systems, where the webmail system either filters which emails the user can see based on a label (e.g. do not show this email unless it’s labelled “UNCLASSIFIED”), or is a duplicate of the normal environment containing only non-sensitive emails.

Ultimately, the decision to implement such a solution lies with the organisation’s risk owners. Clearly, they need to be in full possession of the facts, risks and countermeasures. They will also need to support the development process, because a novel solution like this is likely to attract attention.

The impact of not providing this facility also needs to be assessed. How many staff effectively do this already by emailing documents to their Hotmail account so they can work at home? Recently, transport unions have proposed short-notice five-day strikes – how would the organisation cope if a key transport route was closed? And what would be the impact on carbon-neutral and efficiency-savings targets of not providing it (a need to buy thousands of people a BlackBerry or laptop)?

A likely technical solution would have the following aspects:

· A “front end” webserver in a secured (DMZ) network

· Use of best-commercial-grade SSL encryption

· Registration with companies on the internet that allow ongoing and continual penetration testing of websites

· Endpoint checking – a small software component would have to be downloaded to the untrusted endpoints. This checks the machine to ensure software patch levels and antivirus is at acceptable levels, and perhaps the operating system is agreeable. If the endpoint check fails, or cannot be loaded, access is denied

· A high degree of protective monitoring – user access would be closely logged and anomalies alerted (perhaps in real-time)

· There is an element of end-user responsibility therefore terms and conditions would have to be maintained and agreed. It may be necessary for the solution to be “opt in”

· Use of a one-time-password (RSA SecurID) to replace or complement a password

· It may be desired to redact some information or remove some webmail functionality – for example the ability to download or upload attachments

· It may be desirable to control what machines can access the webmail by performing an enrolment procedure and using certificates. This removes the ability to access from any PC but allows access from pre-agreed PCs (e.g. users home PC)

I’m a big fan of the Microsoft IAG product. At its most basic level it’s an SSL VPN, and brings with it the endpoint checking functionality – so we can ensure the client PC is at a certain patch level.

It also allows us to dip into the data being accessed – in real time – and perform filtering based on rules we set. Finally, it sits atop ISA Server 2006, a firewall that’s Common Criteria EAL4 evaluated – in other words, a robust firewall.

A simplified solution architecture is shown below:

[diagram: simplified solution architecture]

In conclusion, the effort required to achieve this, and the friction of creating a solution that steps outside the normal security paradigm for a high-security organisation, should not be underestimated. But the technology to create a robust solution exists and is heavily used commercially.


September 9, 2009

Microsoft won’t be on EMC’s Christmas card list

Filed under: Uncategorized — chaplic @ 10:08 am

I’ve been helping a client who has email performance issues. The problem is simple enough – most users’ mailboxes are in the multiple-GB range, and there isn’t enough hardware to cope.

It’s all tier-1 hardware – SANs with lots of fast disks in RAID10, multi-CPU servers. There’s just nowhere near enough of it to chew through the TBs of mailboxes and give decent response times.

As part of this we’ve been talking to Microsoft about strategic direction. The environment today is Exchange 2003, so an upgrade to Exchange 2007 – with its better performance and memory usage, and a pretty straightforward upgrade path – seems a no-brainer.

I’ve taken a bit of an interest in Exchange 2010 and have it running in a semi-production environment. I’d read the blurb about how they have improved I/O further, but it never really occurred to me how much of a step change this new version is.

Basically, disk I/O and resilience are off the table as a concern. Microsoft’s advice is to forget even RAID – simply use the in-built replication technology to keep 2, 3, 4… copies of mailboxes. A single (cheap, SATA) disk will service a few hundred mailboxes of the monster size I’m dealing with.

For the first time, Outlook talks to the CAS server instead of the mailbox server directly, which allows an easier redirect when a mailbox store goes down.

It’s hard to see why you would ever deploy a SAN for Exchange again. In fact, you could arguably jettison a lot of the resiliency features of your mailbox servers (dual power supplies, fans).

For many organisations, one mailbox server will be enough, with multiple servers simply added for resilience (plus our CAS and RG servers, of course).

The side effect of moving Exchange off the SAN is that, because we dedicated lots of spindles to get decent performance out of Exchange, we were using a lot of GBs. This space can be set free and reconfigured as RAID5 for file space or suchlike.

If you’re about to buy extra SAN storage because of email capacity issues, don’t. Go get Exchange 2010.
