Sweet, I love rig building. I rarely get to exercise my skills (I work at a computer shop), and nobody wants my advice when building a custom computer. Typically, everyone I talk to in person wants to do it themselves if they're going to do it at all. But I love brainstorming about rig building.
Okay, so I'll start with the motherboard. GIGABYTE isn't bad; I haven't had GIGABYTE boards die on me the way MSI or ASRock boards have. However, since you're on socket 1155, consider the ASUS Sabertooth. It's built with military-grade components (which, while it may not be entirely necessary, is awesome as hell), and it carries the longest warranty ever offered on a motherboard (5 years).
http://www.newegg.com/Product/Product.aspx?Item=N82E16813131821
Moving on to the solid state. All the articles I've read lately point to the same conclusion: the Samsung 8xx series takes the game in terms of reliability. A few drives claim higher peak read/write speeds (the OCZ Vertex 4, for example), but you never actually reach peak read/write on any SSD. For longevity, I would stick with a Samsung 8xx. Even so, solid states are still new technology and prone to error. Make sure you have a backup, or store only your OS on the SSD and keep your data on a hard drive. The SSD I linked below has a 3-year warranty, so for 3 years, regardless of how many times it dies (if any), you can harass Samsung into replacing it at no cost.
http://www.newegg.com/Product/Product.aspx?Item=N82E16820147189
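If you want to see for yourself how far real-world numbers fall short of the rated peak, here's a rough Python sketch that times a big sequential write. The file path and sizes are placeholders I picked; point it at whichever drive you want to test (and keep in mind caches and filesystem overhead will color the result):

```python
import os
import time

# Rough sequential-write test: write 1 GiB in 4 MiB chunks and time it.
# TEST_FILE and the sizes are placeholders -- adjust to taste.
TEST_FILE = "testfile.bin"
CHUNK = 4 * 1024 * 1024        # 4 MiB per write
TOTAL = 1024 * 1024 * 1024     # 1 GiB total

buf = os.urandom(CHUNK)
start = time.time()
with open(TEST_FILE, "wb") as f:
    for _ in range(TOTAL // CHUNK):
        f.write(buf)
    f.flush()
    os.fsync(f.fileno())       # force the data to actually hit the drive
elapsed = time.time() - start

print(f"Sequential write: {TOTAL / elapsed / 1e6:.0f} MB/s")
os.remove(TEST_FILE)
```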
As for RAM, the difference from higher-speed memory is so small it's negligible, and 99.9% of any performance boost you notice will be placebo. The only place higher memory speeds help is in genuinely memory-intensive applications; gaming and programming won't benefit at all. So to me, it doesn't make sense to pay a premium for memory rated at a really high speed. Instead, buy a 16GB (4x4GB) kit of DDR3-1600, which will only cost $80-90, sometimes less if you catch a sale.
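To put some numbers behind that, here's the back-of-the-envelope math on what those speed ratings mean for theoretical bandwidth (spec-sheet peaks, not real-world; the dual-channel assumption is mine, since most 1155 boards run dual-channel):

```python
# Theoretical peak DDR3 bandwidth: transfers/sec * 8 bytes per transfer,
# times the number of channels. Spec-sheet numbers, not real-world.
def ddr3_bandwidth_gbs(transfer_rate_mt, channels=2):
    return transfer_rate_mt * 1e6 * 8 * channels / 1e9

for speed in (1333, 1600, 1866, 2133):
    print(f"DDR3-{speed}: {ddr3_bandwidth_gbs(speed):.1f} GB/s dual-channel peak")
```

The gap looks big on paper, but most desktop workloads never come close to saturating even the slower kit, which is why you don't feel the difference.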
As for brand, my personal favorite is Corsair, but Kingston, G.Skill, and PNY are equally cool.
Consider this:
http://www.newegg.com/Product/Product.aspx?Item=N82E16820233143
Next is the graphics card. The GTX 680 is an absolutely baller card, but I would consider switching to the ATI equivalent, the Radeon HD 7970. Not because I'm an ATI fanboy, but because XFX manufactures ATI cards, and XFX offers lifetime warranties on all of its graphics cards. I haven't seen that anywhere else.
The EVGA GTX 680 benchmarks very similarly to an XFX Radeon HD 7970, but EVGA only warrants the card for 3 years. So when that card dies (and it will), you're dead in the water. (Every EVGA card I've ever owned has shit out.)
Consider this: $50 cheaper, 3GB GDDR5 instead, lifetime warranty.
http://www.newegg.com/Product/Product.aspx?Item=N82E16814150586
As for an HDD, you can grab pretty much any drive you want, though I would stick to either Western Digital or Seagate as a brand. They're the most highly regarded in the HDD industry at the moment. Hitachi drives, on the other hand, I'd avoid entirely.
Don't buy a refurbished or recertified drive; it will die on you sooner rather than later.
WD offers a 5-year warranty on their Caviar Black series drives. They're worth it.
Another note worth making: be cautious about buying drives over 2TB. There is (or was) a problem with controllers not being able to address the entire drive, and larger drives have a considerably higher failure rate. It seems to me everyone is trying to shove more sectors into a drive when we simply don't have the technology to address sectors that dense (safely) quite yet.
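If you're wondering where the 2TB wall specifically comes from, at least part of it is plain math: older controllers and MBR partition tables use 32-bit sector addresses with 512-byte sectors, so anything past 2TiB simply can't be addressed:

```python
# 32-bit LBA with 512-byte sectors caps out at exactly 2 TiB.
sector_size = 512          # bytes per sector
max_sectors = 2 ** 32      # 32-bit logical block address space
limit = sector_size * max_sectors
print(f"Addressable: {limit} bytes = {limit / 2**40:.0f} TiB")
# Anything bigger needs GPT partitioning and a controller with 64-bit LBA support.
```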
On to cooling: you won't have to liquid cool to get the performance you want. Running stock, that computer is going to be lightning fast, man. The only reason I'm currently liquid cooling my build is that I got the liquid cooler from Thermaltake for free, which was awesome.
Also, liquid cooling your video card will cost you another $150-200 for the copper cooling block for your specific card. It's a lot of effort for little outcome.
If you do decide to liquid cool, I would cool only your CPU and leave the graphics card out of the equation. The Thermaltake Bigwater series liquid cooling units are really good in my opinion. There are also some closed-loop Corsair units (like the Corsair H60) that have the pump on top of the CPU block, so you don't need a reservoir or have to worry about refilling coolant, because it doesn't evaporate. I've personally never used one, but on paper they don't look bad at all.
Hope my input helped!
EDIT: (fixed some spelling too)
Also, a note on your power supply. You can run said computer on as little as a 600W power supply (500W if you go with the Radeon HD 7970). Really, the only reasons you'd need more than 600W are running two (or more) video cards at once, running a really high-end compute card (like an NVIDIA Tesla), or running dozens of HDDs. Save yourself a buck and step down on your PSU power rating. You'll never use 1200W, even though a four-digit wattage sounds incredibly badass.
I wouldn't consider getting anything above 600W in this scenario, unless you plan to do something crazy in the future.
And contrary to what I've found to be popular belief, a higher-rated power supply doesn't make your system any more powerful. The 1200W rating means it can supply a maximum of 1200W; the computer will only consume what it needs.
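If you want to ballpark it yourself, just add up the component draws and leave some headroom. Every wattage below is a rough TDP-style estimate I'm assuming for a build like this, not a measured number:

```python
# Rough power budget for a build like the one discussed here.
# All figures are assumed TDP-style estimates, not measurements.
parts = {
    "CPU (socket 1155 quad-core)": 95,
    "GTX 680-class GPU": 195,
    "Motherboard + RAM": 60,
    "SSD + HDD": 20,
    "Fans / peripherals": 30,
}
total = sum(parts.values())
headroom = 1.3   # ~30% margin keeps the PSU in its efficient load range
print(f"Estimated draw: {total} W; suggested PSU: {total * headroom:.0f} W")
```

That lands right around 500-600W, which is why anything bigger is wasted money on a single-GPU build.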
EDIT 2:
A note on multiple video cards.
You rarely see the performance boost you want from multiple video cards. With both NVIDIA's SLI and ATI's counterpart, CrossFire, even with two identical cards linked together, the maximum performance gain you'll typically see is roughly +20%, which, to me, is like 1/5th of a card for 100% of the price.
These technologies are prone to bugs, too. There was (or is) a problem called microstutter, where inconsistent delays between frames rendered by the different GPUs make motion appear to stutter. Neither GPU can know for certain when the other is done rendering; loads are split as proportionately as possible, but it's impossible to perfect.
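To make the microstutter idea concrete, here's a toy sketch. The frame timestamps are completely made up; the point is that average FPS can look fine while the frame-to-frame gaps are all over the place, and the gaps are what your eye actually notices:

```python
# Hypothetical frame times (ms) from an alternate-frame-rendering setup.
frame_times_ms = [16, 8, 24, 9, 23, 8, 25, 9]

avg = sum(frame_times_ms) / len(frame_times_ms)
fps = 1000 / avg
spread = max(frame_times_ms) - min(frame_times_ms)

print(f"Average: {avg:.1f} ms/frame (~{fps:.0f} FPS)")
print(f"Frame-to-frame spread: {spread} ms -- smooth on paper, choppy on screen")
```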
That's not to say everyone has the same issue with SLI/CrossFire.
Just throwing out some observations I've made on the subject.