Colin’s IT, Security and Working Life blog

January 28, 2010

Design for an Exchange 2010 Backup

Filed under: Documentation — chaplic @ 6:41 pm

Like most, I’ve been coming to terms with the storage performance requirements (or, lack thereof) in Exchange 2010.

For any previous Exchange deployment (certainly 2003) you'd start with a SAN and use features like snapshotting to ensure you can back up without affecting performance.

To my mind SANs remain stubbornly expensive for what’s actually delivered (I was just quoted over £3500 for a single 15K 600GB SAS disk which is certified to run in a SAN!).

So the fact I really don’t need one for Exchange 2010 is perfect.

But how do I back up?

Microsoft will tell you not to back up at all: just rely on the native protection and deleted items retention.

I'm a little (just a little) less gung-ho than that, and I suspect many of my customers are, too.

There's very little product choice on the market, or indeed much Microsoft collateral, about how to back up Exchange 2010, so I thought I'd take a stab at a possible solution myself!

My objectives:

  • Don’t “waste” the genuine data protection tools native to Exchange 2010
  • Prepare for the day when an Exchange corruption is replicated to ALL databases (however impossible that might be)
  • Provide a longer-term archive.

 

Consider the following environment:

[Diagram: the example environment, a single DAG with four copies of every database on direct-attached SATA JBOD]

 

We've got a single DAG with four copies of every database. Let's say, for argument's sake, we've got 5000 users with a 1GB mailbox each. Of course, our disks are directly attached and we're using nice-and-cheap SATA storage in a JBOD configuration. Let's use 1TB disks, because smaller ones are only beer money cheaper.
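
To get a feel for the numbers, here's a quick back-of-an-envelope calculation; the 40% overhead allowance is purely my own assumption to cover the dumpster, content indexes, logs and white space, not an official sizing figure:

    # Back-of-an-envelope storage sizing - the overhead factor is my assumption.
    $users     = 5000
    $mailboxGB = 1
    $copies    = 4      # copies of every database in the DAG
    $overhead  = 1.4    # dumpster, content index, logs, white space
    $perCopyTB = ($users * $mailboxGB * $overhead) / 1024
    $totalTB   = $perCopyTB * $copies
    "{0:N1} TB per copy, {1:N1} TB across the DAG" -f $perCopyTB, $totalTB

That works out at roughly 7TB per copy and somewhere near 27TB across the DAG, which is exactly why cheap 1TB SATA spindles are so appealing here.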

So far, so good, so like every other Exchange 2010 design piece. We’re leveraging the native protection and we’ve got four copies of the data.

But how to protect against the replicated corruption scenario?

[Diagram: the same DAG with a lagged mailbox server holding a time-delayed copy, plus additional RAID5 storage for backup-to-disk]

 

I'm using another new feature of Exchange 2010: lagged replication. The server in question is always behind the other servers, so in theory, should the "replicated corruption" scenario occur, we can take action before it plays into our time-delayed mailbox server.

But how long? Too short a delay and the corruption might be missed and played into the lagged database anyway. Too long and invoking the lagged server might risk losing mail.

My best-guess figure is about 24 hours; this is comparable to a normal restore if we don't have the log files.
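
For reference, adding the lagged copy is a one-liner in the Exchange Management Shell. The database and server names below are placeholders I've made up for this sketch, and the 24-hour ReplayLagTime is just my best-guess figure from above:

    # Add a copy of DB01 to the lagged server with a 24-hour replay lag.
    # "DB01" and "EX-LAG01" are placeholder names; the last activation
    # preference keeps the DAG from activating this copy by choice.
    Add-MailboxDatabaseCopy -Identity DB01 -MailboxServer EX-LAG01 `
        -ReplayLagTime 1.00:00:00 -ActivationPreference 4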

Now, observant types will have noticed there are extra disk arrays attached to the lagged mailbox server. To break with custom, these will be RAID5, and their purpose is to act as a file share area for backup-to-disk operations. I'm doing disk-to-disk backups because:

  • I can, at very little infrastructure cost
  • Having recent backups online is always useful.

At the time of writing, the choice of backup products is underwhelming, so I'm going to use the built-in tool (Windows Server Backup). The real downside to this is that I can only back up from the active node, so I need to be really careful about what I'm backing up, and when. Pumping the data across the network in good time might be tricky without the right network setup.
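
As a sketch of what that looks like, a full backup of the volume holding a database might be something like the below. The drive letter and UNC path are placeholders, and I'm assuming the Exchange VSS plug-in is available so that a VSS full backup truncates the logs:

    # Volume-level VSS full backup of the drive holding the database,
    # written to the RAID5 file-share area on the lagged server.
    # D: and \\EX-LAG01\D2D are placeholders for this sketch.
    wbadmin start backup -backupTarget:\\EX-LAG01\D2D -include:D: -vssFull -quiet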

Most likely, one or two databases will get a full backup every night, with all databases getting at least an incremental backup.
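
Scheduling could be as simple as a staggered scheduled task per database; again, the task names, times and paths here are purely illustrative:

    # One scheduled task per database, staggered so only one or two full
    # backups run in any given window (names and times are illustrative).
    schtasks /create /tn "D2D full - DB01" /sc daily /st 22:00 /ru SYSTEM `
        /tr "wbadmin start backup -backupTarget:\\EX-LAG01\D2D -include:D: -vssFull -quiet"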

Now to the final part of the plan: the long-term archive. Hopefully it's never needed, but your operation might have to keep archives of data (this probably isn't the right solution for that; check out Exchange 2010's other new archiving features). More likely, it's needed when the CEO wants an email he deleted 12 months ago.

[Diagram: the disk-to-disk backup files being written out to tape for long-term archive]

Backup-to-tape therefore meets my need. I'm only going to send to tape the files produced by the disk-to-disk backup process, and I'm going to choose my timings wisely.

So there we have it: a fairly robust backup architecture? I'm hoping that as time progresses and products fill the void (DPM 2010, for example) this solution will look archaic, but for now it's my best shot at what Exchange 2010 backup could look like.

May 18, 2009

Maybe the “perfect security theoretical paradigm” needs work

Filed under: Documentation — chaplic @ 1:03 pm

 

There's a lot of talent in the IT and security industry. However, I tend to find the quality of documentation, bid responses and proposals to be massively variable. OK, that's being charitable: it's usually pants!

Here are my thoughts on successful technical document writing, based on peer reviewing documents and evaluating tenders.

Boilerplate text copied off the vendor's website and splattered into documents is utterly evil, frustrating and a waste of time. Your readers have probably read it already, and the language and terminology used won't be consistent with your document. It sends a message that you are lazy. If you must use it, edit it appropriately or include it in an appendix.

By the same token, please, please, please don’t write business bullshit – you’ve read it, you know what it is – AVOID!

If you are responding to requirements, include a table in your document which, column by column, states the original requirement and then how you have satisfied it (usually just a reference to a section of your document). This serves two purposes: it demonstrates to the reader that you've been thorough, and it gives you a tool to spot if you have missed anything.
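
For instance, something as simple as this does the job (the requirements, verdicts and section numbers are entirely made up for illustration):

    Ref    Requirement                                     Response
    R-01   Solution must support 5,000 concurrent users    Compliant - see section 3.2
    R-02   All data must be encrypted at rest              Compliant - see section 4.5
    R-03   Solution must be manageable by existing staff   Partially compliant - see section 6.1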

Just like at school, show your working and read the question.

I've recently reviewed a number of vendors' proposals and lost count of the number of requirements that were answered with what the vendor would like the answer to be (with some requirements missed completely).

Also, don't just present a solution as a fait accompli. Remember, anyone reading your document may have at least as much technical skill as you, and so will have an understanding of the different options. Explain the reasons why you have gone down the route you have. For every point you make, make sure you defend it.

You should be an absolute guru at Visio! The adage about a picture being worth a thousand words is absolutely true: it helps break up text and can also explain your thinking to audience members who think visually. Consider a big "everything in" Visio at the start of your document, then take sections of that graphic when you expand on the solution later.

Most vendors will have Visio stencils available – use them. No matter what kind of document you are writing, presentation is an important facet of getting your message across.

To keep document sizes manageable, I save my Visios as .GIFs and import them into Word. And whilst we're on the subject of document size, don't include large spec lists, code or other generic lists of information in the main body; put them in an appendix.

Overall, no matter how good your technical solution is, it can live or die (or not even be born) depending on how good your documents are. Hopefully this will help improve them!

 
