Colin’s IT, Security and Working Life blog

July 27, 2012

What your mother never told you about Exchange 2010 Migrations

Filed under: Techy — Tags: — chaplic @ 3:32 pm

Microsoft do a pretty good job of getting knowledge ‘out there’ to us techs; there’s formal documentation, blogs direct from the people who put it together, quick-start guides – in just about any format you want. Best of all, it’s not hidden away behind a support contract login; it’s one search away.

So, if you’re planning an Exchange 2010 migration, you’re probably familiar with the term ‘you had me at ehlo’ and various books with a blue/black cover.

But there’s no substitute for experience, and although no two migrations are ever the same, here’s my list of ‘surprises’ from an email migration running into the tens of thousands of mailboxes. You may never encounter them, nor may I again, but maybe, just maybe, it’ll save you a 1AM conference call…

1) You really, really need to understand your user profile
Don’t rely on the Microsoft defaults provided with the Calculator. You have, I assume, an existing Exchange environment – one that you can go out and measure. Once you have those stats, you might find that the idea of hosting 20,000 mailboxes on the old P3 laptop you’ve found in the corner of the office isn’t going to fly. Or, more likely, you’ll find that your initially generous assumptions about deleted item retention and mailbox recovery need to be trimmed a bit, and log-file disks and required IOPS bumped a little. Or a lot.
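As an aside, Exchange 2003 will hand you mailbox sizes and item counts over WMI, so at least part of the profile doesn’t need to be guessed. A minimal sketch, assuming the Exchange_Mailbox class that ships with Exchange 2003 (the server name is a placeholder):

' Sketch: pull per-mailbox size and item count from an Exchange 2003 server
' via WMI, to feed real numbers into the storage calculator.
' "EXCH03SERVER" is a placeholder; Size is reported in KB by this class.
Option Explicit
Const EXCH_SERVER = "EXCH03SERVER"

Dim wmi, mailboxes, mbx
Set wmi = GetObject("winmgmts:\\" & EXCH_SERVER & "\root\MicrosoftExchangeV2")
Set mailboxes = wmi.ExecQuery("SELECT MailboxDisplayName, Size, TotalItems FROM Exchange_Mailbox")

WScript.Echo "Mailbox,SizeKB,Items"
For Each mbx In mailboxes
  WScript.Echo mbx.MailboxDisplayName & "," & mbx.Size & "," & mbx.TotalItems
Next

Run it with cscript and redirect the output to a CSV. The calculator also wants a messages-per-day profile, which you’ll need to pull from message tracking logs or your monitoring separately.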

2) Firewalls need love, too.

Traditionally, a firewall would be put between the bad guys on the internet and the internal network, and perhaps some partner organisations. However, in a diverse network arrangement it’s quite common to find a firewall between your internal client machines and your CAS servers. Your firewall guys will be wise to the fact that a ‘traditional’ Outlook client connection uses MAPI over RPC, which means TCP/135 plus dynamically negotiated high ports. So, bang the protocols and destination IP addresses in the firewall, and away we go?!

Not so.

Modern firewalls can determine exactly what the nature of the RPC traffic is and allow or deny access based on the specifics of the protocol. So they can allow Outlook MAPI traffic, but deny, say, pointing compmgmt.msc at your CAS machines. This is done by specifying the UUID of the MAPI communication protocol.

When your client machine initially connects on port 135, there’s a conversation with the server about the universally unique identifier your client is looking for. The firewall, being piggy-in-the-middle, sees this exchange and then allows the ongoing communication based not only on ‘liking’ the UUID but also on the destination and ports negotiated with the RPC server.

Firewalls being things that like order and predictability will then seek to statefully inspect these communications, and make sure everything is just so.

And herein lies some fun.

Your firewall might boast big numbers like “10Gbit throughput”, but that’s only half the story. Doing the kind of analysis described above is expensive in terms of firewall resources, and you may find you quickly run out of CPU capacity, and that the default state table sizes aren’t big enough. And while we’re here, you might find that one packet in a million isn’t liked by the stateful inspection on the firewall.
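One thing that can make the firewall conversation easier is pinning the RPC Client Access endpoint to a static port, so the rules name a single port rather than the whole dynamic high range. A minimal sketch follows – it assumes the documented “TCP/IP Port” value under the MSExchangeRPC ParametersSystem key, so double-check against the Microsoft guidance for your Exchange 2010 service pack (and agree the port with the firewall team) before relying on it:

' Sketch: pin the Exchange 2010 RPC Client Access service to a static TCP port
' so firewall rules can reference one port instead of the dynamic range.
' Assumes the documented "TCP/IP Port" REG_DWORD under ParametersSystem -
' verify against current guidance for your version/SP. Port number is an example.
Option Explicit
Const STATIC_PORT = 59531

Dim shell
Set shell = CreateObject("WScript.Shell")
shell.RegWrite "HKLM\SYSTEM\CurrentControlSet\Services\MSExchangeRPC\ParametersSystem\TCP/IP Port", _
               STATIC_PORT, "REG_DWORD"
WScript.Echo "Static RPC Client Access port set to " & STATIC_PORT & _
             " - restart the Microsoft Exchange RPC Client Access service to apply."

The Address Book service has a separate static-port setting of its own (configured differently depending on service pack), so treat the above as only half the job.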

3) If you’re migrating from Exchange 2003, you’re really migrating to Exchange 2007 too
I don’t mean you’re doing some kind of painful two-step migration. Naturally, a lot of the literature about Exchange 2010 compares it to Exchange 2007.

That’s great if your source platform is Exchange 2007, but I bet many of you reading this are planning a migration away from Exchange 2003. During your preparations, you should read all the Exchange 2007 upgrade guidance too. Then you might figure out things like:

  • The users will notice a difference when their mailboxes are moved (and I don’t just mean they will get ‘your mailbox is full’ messages less frequently). Outlook 2007 (and 2010, of course) is somewhat stifled by Exchange 2003 and doesn’t expose as many features to the users. This includes better free/busy information showing calendar entries (also known as “OMG EVERYONE CAN SEE MY CALENDAR” – you shared it out years ago!), a revamped out-of-office and better room booking
  • You’d probably better check before you assure the secretaries that they can access the boss’s mailbox no problem, no matter who is migrated and who isn’t
  • And – just for fun – let’s stop anyone editing a distribution list that’s over 5,000 entries. The big boss’s PA loves this when they want to send out a business-critical email (actually, I’m pretty sure that’s not in the 2007 guidance, but I thought I’d throw that in there as a freebie for y’all)

4) Storage also needs love
Now, you’re a switched-on chap/lady (you’ve read this far!), so you know that Exchange 2010 puts to bed the notion that a big, expensive SAN is necessary – good old DAS is the way forward. That’s great, but it doesn’t always play well in large organisations that have certain ways of doing things, with storage teams looking after the spinning disk. Plus, with large re-seed times, it can sometimes make sense to use a SAN.

If your lovely Exchange 2010 databases with their low IO requirements are set to nestle on a SAN, don’t just assume that because you’re using an army of super-expensive disks, all will be well. Those disks connect through a fabric and a storage controller, all of which need to be up to the task of handling at least twice the load (or whatever your DR scenario demands).

Sometimes, the more things change, the more they stay the same. Jetstress is still a critical tool in your arsenal while testing your new environment. Make sure you plan for it, and use it. It’s probably also a good idea to have a frank chat about Jetstress, and day-to-day Exchange load on a SAN, with your storage vendor, because you might find their interpretation of what’s required differs from what the Microsoft calculator produces (which is what you feed into Jetstress). Before you have that chat, have a look at the ESRP website, too. This provides examples of storage designs that are certified to work in particular use cases. Chances are none will fit your environment perfectly, but they provide a good exemplar of what your design should achieve.

So, a few late nights then?

I’ve been involved with Exchange in one form or another since Exchange 4.0 and it’s probably my favourite Microsoft product. While it is more scalable and robust than ever, the complexity has ratcheted up a few notches too, and if nothing else I hope I’ve convinced you that you cannot be too resourced, planned and prepared when it comes to an Exchange 2010 rollout and migration.

February 6, 2012

Considerations for Scheduling Exchange 2010 Migrations

Filed under: Techy — Tags: — chaplic @ 8:49 pm

So, you’ve designed your shiny new exchange environment, and you’re probably migrating from a creaking Exchange 2003 environment?

In theory, the outage for each user lasts only as long as their mailbox move, which comes down to mailbox size and connection speed. However, most organisations will want to schedule migrations in batches, probably out-of-hours, to allow for full system backups and operational tests. It also makes user comms that much simpler.

Obviously it depends on your users’ work profiles, but if we assume standard office workers and a Monday-to-Friday working week, how many nights a week do you migrate? For planning purposes, I’d always include a spare day to allow rescheduling of a failed migration. So you could migrate six nights a week – but what about the helpdesk hit when three times as many people arrive at their migrated mailbox on a Monday morning?
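A back-of-the-envelope calculation is worth doing before you commit to a calendar. All the numbers below are made up – substitute your own pilot measurements:

' Rough sizing: how many migration nights does the estate need?
' Every constant here is a made-up example - use your own measured figures.
Option Explicit
Const TOTAL_MAILBOXES = 20000
Const AVG_MAILBOX_MB = 500
Const GB_PER_HOUR_PER_STREAM = 30   ' measured from pilot mailbox moves
Const STREAMS = 4                   ' concurrent moves you're happy to run
Const WINDOW_HOURS = 8              ' overnight migration window

Dim totalGB, gbPerNight, nights
totalGB = TOTAL_MAILBOXES * AVG_MAILBOX_MB / 1024
gbPerNight = GB_PER_HOUR_PER_STREAM * STREAMS * WINDOW_HOURS
nights = Int(totalGB / gbPerNight)
If totalGB > nights * gbPerNight Then nights = nights + 1   ' round up

WScript.Echo "Total data: " & Round(totalGB) & " GB"
WScript.Echo "Per night:  " & gbPerNight & " GB"
WScript.Echo "Nights needed, before spare days and Mondays off: " & nights

With those example figures it’s roughly 10TB at about 1TB a night, so a couple of weeks of windows before you add any slack – which is exactly the sort of answer that shapes the comms plan.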

How do you actually build a schedule for user moves? There are a few choices and decisions to be made:

  • Per server – start with the first user on the first server and keep going until the last user on the last server. This means you’ll get old servers shut down ASAP. However, the challenge here is how quickly the old server can spit out data – this is likely to be a choke point.

  • Per business unit – this will likely mean a random split of users across all source Exchange servers. However, what if something goes drastically wrong – would you knock an entire department out at once? And depending how good your directory services are, just getting this information could be tricky.

  • By number of people/amount of data – given the speed of your connectivity and source servers (I really hope performance of your new Exchange 2010 tin isn’t a concern!), your testing will suggest how much data you can transfer in a single migration window. However, even if you could migrate tens of thousands of users in a night because they all had small mailboxes, what’s your appetite for helpdesk calls should even 0.1% of people have problems?

Here’s an option that mixes these approaches.

Firstly, don’t migrate on a per-business-unit basis. That makes scheduling a nightmare and, more to the point, introduces the risk of an entire team being knocked out at once. So pick users seemingly at random, then communicate with them individually.

If you have five source Exchange servers, take one person off the first server, increment your person counter, and add their mailbox size to a running total. Assuming you haven’t reached your limit on number of people to migrate OR maximum data to transfer, move on to the next mailbox server and add its next user.

If you do reach a user or mailbox size limit, then zero your counters and declare a new batch.

Keep adding batches until you have a fixed set of, say, two weeks’ worth of migrations.

This approach should evenly spread the load across all source Exchange servers, and it minimises the impact on any individual business area should a problem occur.

The logic to actually achieve this is a bit complex. I resolved it for a client with a little bit of VBScript that throws all the values needed (mainly from AD) into a SQL database, then a second bit of VBScript which produces Excel spreadsheets with the user details on them.
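The batching logic itself is simple enough to sketch. This isn’t the client script – just the core loop with hard-coded, made-up data (in practice the user list came from AD and was already ordered one-user-per-server in rotation):

' Sketch of the batching logic: walk a list that alternates across source
' servers, accumulating a head-count and a mailbox-size total, and start a
' new batch whenever either limit would be breached.
Option Explicit
Const MAX_USERS_PER_BATCH = 3       ' tiny limits so the example output is visible
Const MAX_MB_PER_BATCH = 2500

' "sourceServer,user,mailboxSizeMB" - made-up data, pre-sorted round-robin
Dim users, u, parts, batchNo, batchUsers, batchMB
users = Array("EX1,alice,300", "EX2,bob,1200", "EX3,carol,250", _
              "EX1,dave,800",  "EX2,erin,150", "EX3,frank,2200")

batchNo = 1 : batchUsers = 0 : batchMB = 0

For Each u In users
  parts = Split(u, ",")
  If batchUsers + 1 > MAX_USERS_PER_BATCH Or batchMB + CLng(parts(2)) > MAX_MB_PER_BATCH Then
    batchNo = batchNo + 1        ' either limit hit - declare a new batch
    batchUsers = 0
    batchMB = 0
  End If
  batchUsers = batchUsers + 1
  batchMB = batchMB + CLng(parts(2))
  WScript.Echo "Batch " & batchNo & ": " & parts(1) & " (" & parts(2) & " MB, from " & parts(0) & ")"
Next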

Decide how much advance warning you want to give users; two weeks from the initial communication is probably enough.

14 days before you plan to migrate your first batch, email the users to let them know, inviting them to reschedule if it’s inconvenient.

If you’re increasing users’ mailbox quotas, be sure to tell them this in the email, because for them it’s probably the biggest feature!

The next day, email your second batch of users, the next day your third batch, and so on.

Obviously, you will need follow-up reminder emails closer to the time. I’d recommend telling users you need at least 48 hours’ notice to cancel a scheduled migration. OK, so we all know that’s not strictly true and important cancellations will still happen at the last minute, but hopefully it will reduce the number of last-minute changes.

Keep the messages clear, short and snappy, but direct users to an intranet site for further explanation and a wiki/FAQ.

April 12, 2011

Load Balancing VPN connections over multiple ADSL lines.

Filed under: Uncategorized — chaplic @ 12:03 pm

Here’s the scenario: you have a site with local servers, and for reasons outside your control you cannot get a decent MPLS link (or similar) in quickly.

However, you can get a number of ADSL connections in quickly, and users can use their corporate VPN client to reach head office.

But how do you balance users across the ADSL lines? You could subnet the network and make each ADSL router the default gateway for one subnet, but that’s a lot of network change. Or you could use my little technique, described below.

The VBScript reads an XML file, rolls a dice, and sets up some static routes based on a randomly chosen entry in that file. The static routes refer to the IP addresses of your VPN endpoints.

The script then drops to the shell to run a ROUTE ADD command – note it doesn’t set the route permanently, so it should be run from a login script or similar. Users will need to be members of the “Network Configuration Operators” group.

 

The syntax of the XML is as shown:

<routerandom>
<rtr>
<gateway>IP.OF.FIRST.ADSL</gateway>
<route>ROUTE.OF.VPN.ENDPOINT1 MASK 255.255.255.255</route>
<route>ROUTE.OF.VPN.ENDPOINT2 MASK 255.255.255.255</route>
</rtr>
<rtr>
<gateway>IP.OF.2ND.ADSL</gateway>
<route>ROUTE.OF.VPN.ENDPOINT1 MASK 255.255.255.255</route>
<route>ROUTE.OF.VPN.ENDPOINT2 MASK 255.255.255.255</route>
</rtr>
</routerandom>

The tool is quite flexible and reliable. Unfortunately, it's not as fault-tolerant as I would like, because (certainly with the Cisco VPN client) the software doesn't fail over to the next-lowest-cost route if an ADSL router fails. So, if an ADSL router dies, the only option is to remove it from the XML file.

 

The code is here – forgive me for it being inside a Word doc: RouteRandom.vbs
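In case the Word doc is a pain, here’s a minimal sketch of the same idea – not the original RouteRandom.vbs, and the XML path is a placeholder:

' Sketch of the RouteRandom approach: pick one <rtr> entry at random from
' the XML and add its routes via ROUTE ADD, pointing each VPN endpoint at
' that entry's gateway. Routes aren't persistent, so run it at logon.
Option Explicit
Const XML_FILE = "C:\scripts\routerandom.xml"   ' placeholder path

Dim xml, routers, chosen, routeNode, gateway, shell, cmd
Set xml = CreateObject("MSXML2.DOMDocument.6.0")
xml.async = False
If Not xml.load(XML_FILE) Then
  WScript.Echo "Could not load " & XML_FILE
  WScript.Quit 1
End If

Set routers = xml.selectNodes("/routerandom/rtr")
Randomize
Set chosen = routers.item(Int(Rnd * routers.length))     ' the dice throw

gateway = chosen.selectSingleNode("gateway").text
Set shell = CreateObject("WScript.Shell")

For Each routeNode In chosen.selectNodes("route")
  cmd = "route add " & routeNode.text & " " & gateway
  shell.Run "cmd /c " & cmd, 0, True    ' needs Network Configuration Operators rights
Next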

November 11, 2010

Using a Mac as a Windows Guy

Filed under: Uncategorized — chaplic @ 11:57 am

A few months ago, I needed a new laptop. I wanted something that was fast and sturdy, looked good, and had a good screen and keyboard. I also wanted 8GB of RAM for virtualisation, which was unusual at the time.

I ended up choosing a Mac; when I specced it as I wanted (an i7 processor, the high-res anti-glare screen and a faster HD) it wasn’t that much more expensive than an equivalent Dell. Of course, I need to run Windows on it; however, I have dabbled with the Mac side of things and have spent more time in OS X than Win7 so far. This is just a random collection of thoughts based on a few months’ usage.

 

The hardware itself is excellent – I don’t know if it’s actually more sturdy than any other laptop, but it feels better put together. I do miss the ability to have an internal 3G card, and the USB ports being too close together is annoying, but overall it’s easily the best hardware I’ve used. It’s also quite small, which is handy as I usually carry two laptops.

OS X itself I’m not especially blown away with; it’s been a few years since I used a Mac in anger. I picked it up quite quickly, and the multi-touch trackpad is excellent, but the rest of it doesn’t blow me away with its user-friendliness or anything special. I still struggle with the “menu at the top, application in a window” model of operation, and dislike the window maximising behaviour (or lack thereof!).

Being able to drop into Unix and find standard Unix commands lurking under the surface has been quite handy. Even accounting for the fact I’m more familiar with Windows, I think Windows 7 has it in the usability stakes.

As far as reliability goes, I would say they are both equal. I’ve had apps crash on both, and two kernel panics during my use.

I’m finding OS X a good base to run VirtualBox; I have no evidence, but VMs do feel quicker running under OS X rather than Windows.

In terms of apps, I’ve found everything I need apart from Office and Visio. I haven’t invested in MS Office for Mac yet, and I think OpenOffice and NeoOffice are lousy at handling Word documents (I’ve timed over a minute to open a Word document – unacceptable). I’ll look carefully at Office for Mac before making my choice, but for now I’ll continue to boot into Windows for Office/Visio 2010.

I’m dealing with the odd keyboard layout easily; my brain seems quite comfortable leaping between the Mac keyboard and a normal keyboard.

Overall, what has surprised me is how little difference there is between the two. Any dev work I do is in VMs, and other than the Office issues discussed above, I struggle to find a compelling reason to use one over the other. I’m sticking with OS X for now, more for the novelty factor, but I think long term I’ll find myself in Win7 more.

 

August 6, 2010

Thinking the unthinkable – changes to government IA security architecture

Filed under: Government IT Security — chaplic @ 12:42 pm

I’ve said before that government information security is pretty good. We’ve had leaks and data losses in the past – notably “low tech” problems – but in terms of issues in the public domain involving technology, there’s a pretty good story to tell.

However, times are different now; there is no money left. There are a lot of security controls in place to mitigate risks. Is it time to accept some of those risks and pare down the controls? Let’s look at what can be done, comparing against a normal large business as our “sanity control”. Similarly, most businesses do not have to deal with “life and death” information, so I’m not considering that classification of information.

Government policy will not allow a wide area network to be run without complex encryption over the top. So, whilst most companies buy MPLS from the likes of Cable and Wireless and BT, government will do the same, then overlay a complex and expensive VPN. Removing this as a mandatory requirement would reduce costs in future, and even for currently deployed networks there’s a need to support all these extra boxes (and give them power, cooling). Plus, removing the extra encryption would improve speeds! As long as the migration cost is less than the support cost, everyone wins!

There’s a requirement for hard-disk encryption. Most corporates have woken up to this as an issue, and central government is no different. However, rather than effectively mandating a few, expensively approved products, perhaps the use of common commercial alternatives would save tens of pounds per machine.

VPN is another common business tool. Again, it’s common in government, but mainly done with exotic VPN products you’ve never heard of. Ditch those and go with the Juniper and Cisco kit everyone else uses. Many corporates will also provide webmail for their employees, allowing them to access their email, probably from a home PC. This might alleviate the need for a BlackBerry, laptop and so on. You just won’t see this on a central government system. So provide it, and see mobile comms costs tumble.

Each government department is an autonomous organisation. They are joined up via the “Government Secure Intranet” (GSi), a private WAN used to ship email and allow access to each other’s private websites. For email, if you’re feeling bold you could enforce TLS between your partners, or have a select few use PGP – but otherwise use the internet like everyone else. And when businesses want to share information, they set up VPNs over the internet. Do all this, and scrap the GSi.

You’ll note none of these suggestions are fundamental. I’m not suggesting everyone run linux, or some sort of single, unified IT system. Mainly because change == cost, and drastic change == lots of cost.

However, there has to be a downside, and that is risk. Our attackers will have an easier ride, and those who seek to get at our information will have more success. As cyber-terrorism becomes a reality, would we be setting ourselves up for attack? At the more trivial end, there are bound to be stories about Nigerian scammers getting into government accounts.

The controls that are in place are not there because some security nerd wanted to install the latest gizmo. The question is therefore, is there anyone senior enough to take these decisions and also genuinely accept the risks and guaranteed issues?

June 28, 2010

Getting IT experience – self-taught exercises

Filed under: Uncategorized — chaplic @ 11:47 am

I often get asked “how do I get into IT?” or “what’s the best route?”. Here’s some advice along those lines, but different from the usual guidance on certs and training.

Below is a series of suggested tasks to get you up and running in the IT infrastructure world. Intentionally, I’ve not explained every step in great detail, nor included everything you have to do, and performing these tasks won’t make you an expert in these technologies. In fact, one of the goals of the exercise is to get comfortable and familiar with new technology: googling for information and doing some “try and see” in a safe environment.

1. Build yourself a PC
In years gone by, building a PC from components was a good way to get a cheap PC. These days, less so. However, we have particular needs for this PC, and the actual building and fault-finding process will help us along the path. The exact spec is up to you, but we need:
• As much RAM as possible (suggest 8GB)
• A processor capable of a 64-bit OS and virtualisation
• A DVD drive
Otherwise, it needn’t be the highest spec. You should check all drivers are available in 64-bit versions, though happily it’s very unusual these days for that not to be the case.

2. Microsoft TechNet Direct
A Microsoft TechNet Direct subscription is something every Windows techie should have. For just over a hundred pounds a year, it gives you full access to all Microsoft business software, and is great for testing and evaluating – just as we’re doing here. So get yourself a subscription and make the first thing you download Windows Server 2008 R2, as we’re going to build a…

3. Virtual environment
Now we’ve got a shiny new PC, let’s start to do something with it. Burn your Windows Server 2008 R2 ISO to a DVD and pop it in your machine. Build the OS as you see fit, and install the Hyper-V role – we’re going to use that as our virtualisation software. Other than the basic software you need to manage the server (I’d also suggest 7-Zip as a good tool), you shouldn’t modify the base server. That’s what VMs are for!
First things first, let’s build a basic server image VM. Fire up the Hyper-V console and configure a VM with settings you think make sense. Copy the Windows 2008 R2 ISO file to the machine and mount it. Turn on your virtual machine and install Windows 2008 R2. When it has finished building, make sure you install the Hyper-V integration services.
Close the virtual machine down and take a copy of the VHD. We’ll use that as a “gold” image to build other Hyper-V machines.

4. Build an Active Directory
Our first server is going to be an Active Directory domain controller – this is used by almost all other Windows server components, so it makes sense to build it first. Copy the gold VHD and configure a new VM – I’d give it, say, 4GB of RAM while we’re actively using the machine, and reduce the RAM once it’s just ticking over in a steady state.
Because the VM is a copy of the gold image, generalise it first so it gets a unique identity – sysprep /generalize is the supported route on 2008 R2 (the old NewSID tool has been retired and doesn’t cover it).
Promoting the server to a domain controller will also install DNS – set up forwarding so that it forwards to your ISP’s DNS servers.
Decide on the structure of your OUs: where you will put users, computers and servers.
Create some users and groups (a rough ADSI sketch follows below) called:
• Admins
• Managers
• Staff
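Creating the OU and groups is a nice first taste of ADSI scripting, so here’s a rough sketch – the domain DN (dc=testlab,dc=local) and the names are placeholders, and you could equally do the whole thing in Active Directory Users and Computers:

' Sketch: create an OU, the three groups and a test user with ADSI.
' dc=testlab,dc=local is a placeholder - substitute your own domain.
Option Explicit
Const ADS_GROUP_GLOBAL_SECURITY = -2147483646   ' global, security-enabled

Dim domain, ou, grp, usr, names, n
Set domain = GetObject("LDAP://dc=testlab,dc=local")

Set ou = domain.Create("organizationalUnit", "ou=LabUsers")
ou.SetInfo

names = Array("Admins", "Managers", "Staff")
For Each n In names
  Set grp = ou.Create("group", "cn=" & n)
  grp.Put "sAMAccountName", n
  grp.Put "groupType", ADS_GROUP_GLOBAL_SECURITY
  grp.SetInfo
Next

' One test user, dropped into the Staff group
Set usr = ou.Create("user", "cn=Test User")
usr.Put "sAMAccountName", "testuser"
usr.SetInfo
usr.SetPassword "Chang3Me!Please"
usr.AccountDisabled = False
usr.SetInfo

Set grp = GetObject("LDAP://cn=Staff,ou=LabUsers,dc=testlab,dc=local")
grp.Add usr.ADsPath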

5. Install WSUS

This might be a learning environment, but we want to follow best practice! So download WSUS from Microsoft. I’ll leave it to you to decide whether to install it on the Active Directory server or build a new server to host it.
The next thing to do is build a GPO to ensure all machines refer to the local WSUS server for updates. Decide on your update strategy, both in terms of WSUS approvals and application of patches. I’d be inclined to have automated approvals and install as much as possible, as this is only a trial environment.
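For interest, what that GPO does under the bonnet is set a handful of registry values on each client. The sketch below is the standalone-machine equivalent (server name, port and value names as I remember them – in the lab, do it properly with the GPO and verify the value names against the WSUS documentation):

' Sketch: point a single machine at a WSUS server by writing the policy
' registry values the GPO would normally set. Names/port are assumptions -
' verify against the WSUS docs; the GPO is the proper way to do this.
Option Explicit
Const WSUS_URL = "http://wsus01:8530"   ' placeholder server and port

Dim shell
Set shell = CreateObject("WScript.Shell")
shell.RegWrite "HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\WUServer", WSUS_URL, "REG_SZ"
shell.RegWrite "HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\WUStatusServer", WSUS_URL, "REG_SZ"
shell.RegWrite "HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU\UseWUServer", 1, "REG_DWORD"
shell.RegWrite "HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU\AUOptions", 4, "REG_DWORD"  ' auto download and schedule install
WScript.Echo "Pointed Windows Update at " & WSUS_URL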

6. PC Image
If possible, this should be done on a “real” PC. If you don’t have the kit, then a virtual machine will have to do. I’ll leave the operating system choice up to you, but XP is still a valid choice as it’s still used everywhere – although it might add complexity with your automated deployment tool.
What we’re doing here is building a PC in the anticipation that it’s going to be rolled out to thousands of desktops, so we want the initial install scripted (i.e. automated input of user details, serial number and so on).
Include any drivers that your target machines are likely to need, plus service packs and patches. Don’t install any applications (that will follow).
Then follow the instructions for preparing the machine for imaging, which will include resetting the SID, wiping the swap files and so on.
You need to decide on a deployment method: RIS or WDS. WDS is the newer technology, but there might be a reason you want to choose RIS, especially if you have XP as your OS.
Once you have that up and running, image a few PCs (virtual or real) and see how you get on.

7. Install Software
Most big companies will have a heavyweight deployment tool to package and deploy software; here we’re going to keep it simple and use the built-in Windows software deployment via Group Policy.
Download some Microsoft software (I suggest Office and Visio) and configure these packages so AD will deploy them to all PCs (not servers, though!).

8. File and Print Server
We want to set up a file share with particular access rights:
• Admins – Full Control
• Managers – Change
• Staff – Read only
Also, all users should have this share mapped automatically as their “X: drive” upon login (see the logon script sketch below).
It’s your choice whether to set up another dedicated file server VM or “piggy-back” on an existing one.
Your next task is to set up a network printer. This should be configured so that users can connect to \\servername\printername and have the drivers for that printer installed automatically. Note that if you have a USB printer, it may well be easier to share it from the “real” server.
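A logon script is the traditional way to do the drive map and printer connection; here’s a rough sketch (server, share and printer names are placeholders):

' Sketch of a logon script: map the X: drive and connect the shared printer.
' FILESERVER, shared and OfficePrinter are placeholders.
Option Explicit
Dim net
Set net = CreateObject("WScript.Network")

On Error Resume Next
net.RemoveNetworkDrive "X:", True, True     ' clear any stale mapping first
On Error GoTo 0

net.MapNetworkDrive "X:", "\\FILESERVER\shared", False

net.AddWindowsPrinterConnection "\\FILESERVER\OfficePrinter"
net.SetDefaultPrinter "\\FILESERVER\OfficePrinter"

Assign it as a logon script in a GPO linked to your users’ OU, or set it on the Profile tab of the user accounts.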

9. Exchange
This is a big one! I would actually suggest installing Exchange 2003, as many companies still use it, and migrating away from it is a useful exercise in itself. However, your gold VM image will not be sufficient, as Exchange 2003 needs a 32-bit OS.
Build a new VM, install Exchange 2003 and create Exchange mailboxes for your users.
Now here comes the clever bit: we’re going to set up email routing to and from the internet. Go to a provider of dynamic DNS services like dyndns.com and set up a DNS name for your organisation that’s registered against your current connection’s IP address. Also set up an MX record pointing to the same name. You now need to configure your ADSL router/cable modem/etc. to forward port 25 traffic from the internet to the IP address of your Exchange server.
Automatically create email addresses for your users in the format name@your-dynamic-dns-entry.
Finally, you should configure Outlook so that it automatically creates a profile for end users to connect to their new mailbox.

10. Document
Now that we’ve got a cracking IT infrastructure, let’s have a go at documenting it (OK, we should probably have done that first, but hey, this is only an exercise). Fire up Visio (downloaded from your TechNet subscription) and describe your environment. Your diagram should include:
• All your servers: names, IP addresses, function
• Active Directory
• Exchange
• Internet connection
• How mail is routed in and out
• Virtual versus real machines

June 3, 2010

Inventory Audit of a complex IT network

Filed under: Uncategorized — chaplic @ 5:49 pm

I’ve been spending some time doing a complete inventory of a rather complex IT environment, with more firewalls, servers, weird networking and all-round oddness than you can imagine. The network has around 10,000 IP devices and has grown fairly organically, with various organisations having been responsible for its upkeep – to varying levels of quality.

The need was for a “point in time” inventory of what’s out there. We didn’t have the use of a big tool like centinel, nor did we wish to use the existing network management tools (so as to provide an independent result). Oh, and I had limited admin access rights.

Here’s how I did it

Largely, the work was split into two pieces: a Wintel audit and a “non-Wintel” audit – encompassing networks, printers, switches, SANs…

The Wintel audit was fairly easy – I cobbled together a VBScript to query machines using WMI and pull back installed software, machine spec and so on – just the basic info you might want if you need to take migration decisions. I’ll post the script up in my next blog entry.
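Until that post appears, here’s a minimal sketch of the same idea (not the actual script) – the machine name is a placeholder, and note that Win32_Product only sees MSI-installed software, so treat the software list as indicative:

' Sketch of a WMI audit of one machine: basic spec, OS and installed software.
' "PC001" is a placeholder - in practice, loop this over a list of machines.
Option Explicit
Const TARGET = "PC001"

Dim wmi, item
Set wmi = GetObject("winmgmts:\\" & TARGET & "\root\cimv2")

For Each item In wmi.ExecQuery("SELECT * FROM Win32_ComputerSystem")
  WScript.Echo item.Name & " | " & item.Manufacturer & " " & item.Model & " | " & _
               Round(item.TotalPhysicalMemory / 1048576) & " MB RAM"
Next

For Each item In wmi.ExecQuery("SELECT Caption, Version FROM Win32_OperatingSystem")
  WScript.Echo "OS: " & item.Caption & " (" & item.Version & ")"
Next

For Each item In wmi.ExecQuery("SELECT Name, Version FROM Win32_Product")
  WScript.Echo "SW: " & item.Name & " " & item.Version
Next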

The non-Wintel audit was more involved. Firstly, I used nmap to scan every IP device. It takes an “educated guess” as to what each device is, and does a reasonable job. The most surprising fact was that there was quite a lot of Wintel kit in there I hadn’t picked up, because the machines were in different domains and workgroups. These were then added to the Wintel audit.

This gave me an outline of what to look for and how to investigate.

There were hundreds of printers on the estate, almost all HP. The nmap tool had done a reasonable job of guessing the type, but it wasn’t precise. To pin them down, I fired up HP Jet Direct tools, a little light-weight tool that HP no longer provide in this basic form – a shame, because it’s all that’s needed. I just gave it the IP addresses relating to HP printers and it went off and queried them. Minutes later, I had NetBIOS names and proper printer models. Lovely.

I didn’t have full access to the networking devices, but I did have the SNMP community strings, so I used Billy the Kid’s Cisco SNMP tool.

I simply fired in a switch’s IP address and the community string, and the tool got the switch’s CDP neighbours, helpfully giving me the model names and IP addresses of layer-2 connected switches. From this, I was not only able to build a network map, I was able to make the inventory far more accurate.

However, there was an area that was, by design, hidden from view. The client has multiple connections to multiple systems, so has a myriad of firewalls and DMZs. I peered through the firewall rulesets to see if I could find equipment on the network that was hidden from ICMP scans – easy on the Cisco ASDM, Checkpoint FW1 and the Juniper, slightly more complex reading the config of an old PIX! Doing this enabled me to find servers, switches and more firewalls behind firewalls.

Then it was just a case of manually picking off the oddities. The nmap scan found lots of Sun boxes; helpfully for me, they all revealed their machine name when I FTPd to them, or via finger @machinename. Almost all other devices told me enough to be useful just by connecting via Telnet, SSH, HTTP or HTTPS – APC power strips, specialised printers, even a plasma screen that talks IP!

The result? An asset list that’s about double the previous list…. and a lot of “must improve housekeeping” to do!

April 26, 2010

Making Live Messenger work with 3 3G Service

Filed under: Uncategorized — chaplic @ 8:16 pm

For some reason, I have never got Live Messenger to work on my laptop (Dell XPS M1530, Windows 7 x64) using a “3” 3G connection via the built-in 3G card and the Dell Mobile Broadband Card utility. It wasn’t really a concern – until now!

It would try to log in, and keep trying for a while…

[screenshot]

Before dying with Error Code: 8000402a:

[screenshot]

 

Clicking “Get more information” was the final insult, as all the results were in French. Sacré bleu!

[screenshot]

3 supply a private IP address, and there have been a number of occasions where the web has been dead but TCP/IP connectivity was fine, so I assume they use a “transparent proxy” – which I suspected of being at fault. Not much I could do about that, and I didn’t fancy calling the call centre while I was on the train.

Googling was difficult – when you are a company selling 3G services, calling yourself “3” isn’t helpful!

The diagnostics built into Messenger weren’t very helpful: apparently I have an invalid IP address. It works OK for the web, though.

[screenshot]

And, in fairness, the IP address arrangement handed down by the 3G connection does seem a bit odd…

[screenshot]

 

But this led to a dead end… so what now?

 

  • Firewall… tried that, no difference
  • Windows event log – nothing
  • Diagnostics in Live Messenger – too cryptic for me to decipher

My next step was to fire up Wireshark and try to understand what was going on. But sometimes inspiration comes from funny places.

I decided to see what would happen if I ran Live Messenger as administrator. No difference.

Then I tried compatibility mode, changing it to Windows XP SP3:

[screenshot]

A UAC prompt pops up; I select “Yes”.

And….. drumroll….. SUCCESS!!

[screenshot]

 

All I need to do now is configure UAC not to complain when Messenger launches!

February 18, 2010

VB Script to delete files older than a few days

Filed under: Uncategorized — chaplic @ 5:35 pm

I had a client that, for a variety of reasons, moved the Outlook temporary files to a different folder from the default. It was noticed that Outlook wouldn’t always delete its temporary files, so someone had hacked together a little VBScript to do the job.

However, it appeared to be buggy and didn’t delete all the files, with no obvious reason why. Also, if the temp area wasn’t available when it ran, it would error to the screen.

To help, I concocted this script. I’m no VBScript guru, but by looking at examples on the web I pulled this together. It majors on letting the user know what it’s doing and on trapping errors. The program style isn’t perfect, but it works.

Most importantly of all, we found the bug in the previous effort: some Outlook attachments were marked as read-only and therefore not deleted. This script gets round that by forcing deletion – stopped only by permissions problems.

Here it is in all its glory..

Option Explicit

' Path to delete
const FolderName = "d:\Temp"
' Age of file (last modified) in days before it is deleted
const Days = 7

' Set debugmode to 1 for screen pops with helpful messages
const debugmode = 0

const SUCCESS = 0
const ERROR = 1
const WARNING = 2
const INFORMATION = 4

dim result
logit "Beginning process to delete temp files in " & FolderName & " older than " & Days & " days", INFORMATION
result = delFiles(FolderName, Days)

Function delFiles(strDir, strDays)
' Take in the path to delete and the number of days old a file has to be,
' then delete the files in that path if they are older than that
  dim fso, folder, folderContains, individualFile
  dim strComments
  dim intDeleted, intNotDeleted, intDifferenceInDays

  intDeleted = 0
  intNotDeleted = 0
  Set fso = CreateObject("Scripting.FileSystemObject")

  on error resume next
  Set folder = fso.GetFolder(strDir)
  if err = 0 then
    strComments = strComments & strDir & " exists." & VBCRLF
  else
    ' Folder unavailable - log the problem and stop instead of erroring to screen
    strComments = strComments & "**ERROR** Accessing folder - cannot find " & strDir & VBCRLF & _
                  err.description & " (" & err.source & ")" & VBCRLF
    err.Clear
    logit strComments, ERROR
    delFiles = 1
    Exit Function
  end if
  err.Clear

  Set folderContains = folder.Files
  strComments = strComments & "Deleting files older than " & strDays & " days" & VBCRLF

  For Each individualFile in folderContains
    ' Loop through each file in the folder and check its last-modified date
    if debugmode = 1 then
      wscript.echo "Looking at " & individualFile & VBCRLF & _
                   "Which has date last modified of: " & individualFile.datelastmodified & VBCRLF & _
                   "To see if it's " & strDays & " days older than " & Now
    end if
    strComments = strComments & VBCRLF & "Analysed " & individualFile & ": "
    intDifferenceInDays = DateDiff("d", individualFile.datelastmodified, Now)

    If intDifferenceInDays > strDays Then
      if debugmode = 1 then
        wscript.echo "We've decided to delete " & individualFile & "... Datediff is " & intDifferenceInDays
      end if
      strComments = strComments & " Deleting...."
      on error resume next
      ' Force deletion so read-only files (e.g. some Outlook attachments) go too
      fso.DeleteFile individualFile, TRUE
      if err = 0 then
        intDeleted = intDeleted + 1
        strComments = strComments & ". SUCCESS"
      else
        intNotDeleted = intNotDeleted + 1
        strComments = strComments & ". **ERROR** " & err.description & " (" & err.source & ")"
      end if
      err.Clear
    Else
      if debugmode = 1 then
        wscript.echo "We've decided to spare " & individualFile & "... DiD: " & intDifferenceInDays & "  Required: " & strDays
      end if
      strComments = strComments & "Not deleted. Only " & intDifferenceInDays & " days old"
    End If
  Next

  strComments = strComments & VBCRLF & "No of files deleted: " & intDeleted & VBCRLF & "Errors deleting: " & intNotDeleted

  if intNotDeleted > 0 then
    logit strComments, ERROR
  else
    logit strComments, INFORMATION
  end if
  delFiles = 1
End Function

Function logit(text, level)
' Writes a simple message to the Windows event log
  dim WshShell
  set WshShell = CreateObject("WScript.Shell")
  WshShell.LogEvent level, text
  logit = 1
end Function

January 28, 2010

Design for an Exchange 2010 Backup

Filed under: Documentation — chaplic @ 6:41 pm

Like most, I’ve been coming to terms with the storage performance requirements (or, lack thereof) in Exchange 2010.

For any previous Exchange deployment (certainly 2003) you’d start with a SAN and use features like snapshotting to ensure you can back up without affecting performance.

To my mind SANs remain stubbornly expensive for what’s actually delivered (I was just quoted over £3500 for a single 15K 600GB SAS disk which is certified to run in a SAN!).

So the fact I really don’t need one for Exchange 2010 is perfect.

But how do I back up?

Microsoft will tell you they don’t back up at all, just rely on the native protection and deleted items retention.

I’m a little – just a little – less gung-ho than that, and I suspect many of my customers are, too.

There’s very little product choice on the market, or indeed much Microsoft collateral about how to back up Exchange 2010, so I thought I’d take a stab at a possible solution myself!

My objectives:

  • Don’t “waste” the genuine data protection tools native to Exchange 2010
  • Prepare for the day when an Exchange corruption is replicated to ALL database copies (however unlikely that might be)
  • Provide a longer-term archive.

 

Consider the following environment:

[diagram]

 

We’ve got a single DAG with four copies of every database. Let’s say, for argument’s sake, we’ve got 5,000 users with a 1GB mailbox each. Of course, our disks are directly attached and we’re using nice-and-cheap SATA storage as JBOD. Let’s use 1TB disks, because the smaller ones are only beer money cheaper.

So far, so good, so like every other Exchange 2010 design piece. We’re leveraging the native protection and we’ve got four copies of the data.

But how to protect against the replicated corruption scenario?

 

 

[diagram]

 

I’m using another new feature of Exchange 2010: lagged replication. The server in question is always behind the other servers, so in theory, should the “replicated corruption” scenario occur, we can take action before the corruption plays into our time-delayed mailbox server.

But how long a lag? Too short a delay and the corruption might get missed and played into the lagged database anyway. Too long and invoking the lagged server might risk losing mail.

My best-guess figure was about 24 hours; this is comparable to a normal restore if we don’t have logfiles.

Now, observant types will have noticed there are extra disk arrays attached to the lagged mailbox server. To break with custom, these will be RAID5, and their purpose is to act as a file share area for backup-to-disk operations. I’m doing disk-to-disk backups because:

  • I can, at very little infrastructure cost
  • Having recent backups online is always useful.

At the time of writing, the choice of backup products is underwhelming, so I’m going to use the built-in tool (Windows Server Backup). The real downside is that I can only back up from the active node, so I need to be really careful about what I’m backing up, and when. Pumping the data across the network in good time might be tricky without the right network setup.

Most likely, one or two databases will get a full backup every night, with all databases getting at least an incremental backup.
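To give a flavour of what that looks like in practice, here’s a sketch of driving the built-in tool from a scheduled script. Drive letters are placeholders, and the wbadmin switches should be checked against the documentation for your OS – the intent is a VSS full backup of the database and log volumes to the local backup-to-disk area:

' Sketch: kick off a VSS full backup of the Exchange volumes to the
' RAID5 backup-to-disk area using the built-in Windows Server Backup.
' X:, D: and E: are placeholders; verify the wbadmin switches for your OS.
Option Explicit
Const BACKUP_TARGET   = "X:"        ' backup-to-disk volume
Const INCLUDE_VOLUMES = "D:,E:"     ' database and log volumes

Dim shell, cmd, rc
Set shell = CreateObject("WScript.Shell")
cmd = "wbadmin start backup -backupTarget:" & BACKUP_TARGET & _
      " -include:" & INCLUDE_VOLUMES & " -vssFull -quiet"

rc = shell.Run("cmd /c " & cmd, 0, True)
If rc = 0 Then
  WScript.Echo "Backup completed"
Else
  WScript.Echo "Backup returned exit code " & rc
End If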

Now to the final part of the plan: the long-term archive. Hopefully it’s never needed, but your organisation might need to keep archives of data (this probably isn’t the solution for that – check out the other new Exchange archiving features). More likely, it’s needed when the CEO wants an email he deleted 12 months ago.

[diagram]

Backup-to-tape therefore meets my need. I’m only going to send to tape the files produced by the disk-to-disk backup process, and I’m going to choose my timings wisely.

So there we have it: a fairly robust backup architecture? I’m hoping that as time progresses and products fill the void (like DPM 2010) this solution will look archaic, but for now it’s my best shot at what backup could look like.
