Quote:
Originally posted by BlueSkyJaunte
SQLSvr2k5 (or whatever) would introduce a whole raft of other problems. All DBMSs have their quirks.
That said, we use Oracle for our real-time, high-volume stuff. I'm talking multiple terabytes. Probably into the petabytes in some cases.
Yep... I hear ya. I'm currently contracted as the DB architect of the Oracle cluster that's feeding EA's new SportsWorld (http://www.easportsworld.com), the real-time online integration for all EA Sports games (Madden, FIFA, etc.) on Xbox, PS3, etc.
In my heavily-biased opinion, Oracle is THE DB of choice when it comes to needing performance.
That being said, I can't see PP forking over the licensing costs for Oracle any time soon...
I think the trick is to figure out where the performance bottleneck actually is, as was alluded to before (memory, disk I/O, or CPU contention), and then see if you can re-architect some of the hardware to eliminate it.
If you can keep all searches in memory, that's IDEAL. As soon as you hit physical disk, you take at least a 10x performance hit, and often far worse.
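To put rough numbers on that (ballpark, order-of-magnitude figures I'm assuming for illustration, not measurements from any particular system):

```python
# Illustrative random-access latencies; real hardware varies widely.
ram_ns = 100             # ~100 ns for a random DRAM access (assumed)
disk_ns = 5_000_000      # ~5 ms for a random HDD seek + rotation (assumed)

slowdown = disk_ns // ram_ns   # how much slower a disk hit is than a RAM hit
```

With those assumptions, every search that falls out of RAM and onto a spindle is tens of thousands of times slower, which is why keeping the working set in memory dominates everything else.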
If you're running into CPU issues (all cores are maxed out), then you need to add more CPU cores.
If your drives aren't responding fast enough, then you may have to upgrade your existing drives or add more spindles. It's ALL about RPMs and number of spindles. If you have 10k RPM drives, simply replacing them with 15k drives can cut response times substantially, since both rotational latency and seek time drop.
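Back-of-envelope on the RPM point (this only accounts for rotational latency; 15k drives typically seek faster as well):

```python
def avg_rotational_latency_ms(rpm: int) -> float:
    """Average wait for the target sector: half a revolution."""
    return (60.0 / rpm) / 2.0 * 1000.0

lat_10k = avg_rotational_latency_ms(10_000)   # 3.0 ms average
lat_15k = avg_rotational_latency_ms(15_000)   # 2.0 ms average
saved = 1 - lat_15k / lat_10k                 # ~0.33: a third less rotational delay
```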
Otherwise, add more drives, and ensure that you're running the proper RAID level for your requirements. RAID 5 is a compromise; RAID 10 is ideal, but also the most expensive.
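The RAID 5 vs RAID 10 trade-off in rough numbers (a hypothetical sizing helper, assuming textbook capacity and write-penalty figures):

```python
def raid_usable_tb(level: str, drives: int, drive_tb: float) -> float:
    # Hypothetical helper for illustration only.
    if level == "raid5":
        return (drives - 1) * drive_tb    # one drive's worth of capacity goes to parity
    if level == "raid10":
        return (drives // 2) * drive_tb   # every block is mirrored
    raise ValueError(f"unsupported level: {level}")

# Same 8 x 1 TB drives, very different trade-offs:
r5 = raid_usable_tb("raid5", 8, 1.0)    # 7.0 TB usable, ~4 back-end IOs per random write
r10 = raid_usable_tb("raid10", 8, 1.0)  # 4.0 TB usable, ~2 back-end IOs per random write
```

RAID 5 gives you more usable space per dollar; RAID 10 gives you roughly half the write overhead, which is why it wins for write-heavy database workloads.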
Also, ensure that you have multiple IO paths to your drives... check to see if you're maxing out your drive throughput.
If you're running out of memory, then you need to add more, unless you're already maxed out.
But of course, it's sometimes not possible to keep everything in RAM (at least on one box).
I've seen other sites handle this kind of scenario (large amount of searchable data, MySQL, etc.) by keeping the main tables on physical drives on the local box (hardware or software RAID, whatever works), while the search indexes are spread out onto other machines via iSCSI mount points (using netfiler, etc.) over a switched 1-10 Gb network. Those other boxes are nothing more than diskless servers with a net boot and monster amounts of RAM; the netfiler iSCSI mounts are actually 32GB RAM disks.
6 plain jane boxes with nothing but RAM in them, cheaper than licensing other products, etc.
It's like a poor man's implementation of a solid-state drive setup.
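A quick sizing sketch for that setup (the index size, per-box RAM, and headroom figures here are all assumptions for illustration, not numbers from the post):

```python
import math

def ramdisk_boxes_needed(index_gb: float, box_ram_gb: float = 32.0,
                         headroom: float = 0.10) -> int:
    # Hypothetical helper: leave some RAM free on each box for the OS
    # and for index growth, then divide the index across what's left.
    usable_gb = box_ram_gb * (1.0 - headroom)
    return math.ceil(index_gb / usable_gb)

# e.g. ~160 GB of search indexes across 32 GB RAM-disk boxes:
boxes = ramdisk_boxes_needed(160.0)   # 6 boxes
```

Run the same arithmetic against your actual index sizes before buying hardware; the point is just that a handful of RAM-only boxes covers a surprisingly large index.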
The only caveat is that if you reboot the RAM-disk boxes, you have to rebuild the search indexes. Typically, though, that doesn't take long, even on large sites (usually under 10 minutes, depending on size). If you only have to do that once every 6 months, that's a reasonable trade-off.
Of course, it all depends on the details... how many rows in base tables, what columns in what tables are indexed for search, how large those text search indexes are, etc., etc.
I haven't really seen any specific numbers on that stuff though so I can't comment on it.
$0.02
...jeff