Pelican Parts Forums (http://forums.pelicanparts.com/)
Porsche 911 Technical Forum
It may be time for the server upgrade... (http://forums.pelicanparts.com/porsche-911-technical-forum/357191-may-time-server-upgrade.html)

euro911sc 07-15-2007 12:26 PM

Wayne,

I'm sure that there are IT pros on the BBS who would gladly donate their time/resources to ensure the BBS runs smoothly and that rebuilds, etc. are done as needed. I run my whole company on remote access, so I'm sure that's not going to be an issue... I would donate my time if I thought you could use my skills, but I don't have any skills, so that shoots that out the door ;)

As far as DB maintenance goes, it's all about keeping up with it so you don't get hit with big corruptions... you know this, I'm sure. Maybe you simply schedule it every Wednesday at 3am Cali time or some such...
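Something like this dropped into the scheduler for that 3am window would do it (just a rough sketch, assuming Python and the mysql command-line client are on the box; the database name and login below are made up):

Code:

# Rough sketch of a weekly maintenance pass over the forum's MySQL tables.
# The database name and credentials are placeholders -- adjust for the real server.
import subprocess

DB = "forum_db"   # hypothetical database name
MYSQL = ["mysql", "-u", "maint_user", "-pPASSWORD", "-N", DB, "-e"]

def run_sql(sql):
    """Run one statement through the mysql command-line client and return its output."""
    result = subprocess.run(MYSQL + [sql], capture_output=True, text=True)
    return result.stdout

# CHECK catches corruption early; OPTIMIZE defragments tables and rebuilds indexes.
for table in run_sql("SHOW TABLES;").split():
    print(run_sql("CHECK TABLE `%s`;" % table))
    print(run_sql("OPTIMIZE TABLE `%s`;" % table))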

Best regards,

Michael

DanL911sc 07-16-2007 02:01 PM

These PHP-based forum packages have always seemed slow and cumbersome to me. Maybe the PHP code isn't very good, or maybe there are too many small images, or the database table layout may not be good... I don't know, but it shouldn't be slow.

For a fast example (that makes terrible use of frames), check out craigslist's forums (http://forums.losangeles.craigslist.org/?forumID=5). Speedy as h*ll, even though the frames make it un-navigable.

Wayne, is the database data for the forum available publicly for those of us with the ability (check) and the time (hmm, maybe not) to tinker and set up a test site using different/better software?

PS: People have recommended http://www.punbb.org/ as a good alternative that is still PHP-based...

PPS Not that I'm volunteering, or anything :p

dshepp806 07-16-2007 02:20 PM

Looks like we have one in DanL911sc... heads up.

DanL911sc 07-16-2007 07:22 PM

Running just the forum list page through http://www.websiteoptimization.com shows some interesting stuff. The page should download in 7.71 seconds over a T1 line, and take 19.13 seconds over a 56k line... kind of slow.

Among other things, the culprit appears to be that the page consists of 37 unique objects (33 of them images), coming from 4 different sites. Just the request latency for those objects makes up the bulk of the delay over a T1 connection.

That said, once they are cached, it should be much faster than it is!
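For what it's worth, the back-of-envelope math lines up with the tool's numbers (the per-object overhead figure here is my guess at what the analyzer assumes, not a measurement):

Code:

# Back-of-envelope: with 37 objects, per-object request overhead alone
# explains most of the 7.71 second T1 estimate.
objects = 37                # unique objects on the forum list page
overhead_per_object = 0.2   # assumed seconds of connection/request overhead each

print("latency-only delay: %.1f seconds" % (objects * overhead_per_object))  # ~7.4 s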

foamy911 07-16-2007 08:10 PM

Sent you a PM Wayne.

porschefool 07-17-2007 07:27 AM

Why not get rid of all the BMW Forum stuff? Nobody REALLY needs that, do they? I'm sure that's what's slowing everything down... I know they slow ME down!
;)

JK-81SC 07-17-2007 07:34 AM

Here's an idea for Pelican Parts, but it may be a time/resource issue.

The idea is to allow one member to "thank" another member for helping them. There are countless professionals, shop owners and busy people who go out of their way to help others with troubleshooting problems and ways to fix our cars.

The idea would allow one member to give some "Pelican Bucks" to another member's account. This would allow us to show our gratitude and thank that person. I'd give someone $5 or $10 for a thank you, plus it would be easier than the gift certificate.

Pelican Parts gets to sell more merchandise when people redeem their Pelican Bucks. The major contributors get a few free parts for all their hard work. The forum keeps going, and everyone is happy.

The big question is how much work it would take Wayne's staff to implement the program. What do you think Wayne?

TroyGT 07-17-2007 09:06 AM

I love the site format and the way it works. Very smooth and intuitive. However, it does appear to hiccup a lot lately. Sounds like you've hit a software ceiling or capacity limit. Would load balancing or scaling out the web servers help, i.e., have one or more internal front-end web servers connect to a single high-end back-end database?
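Rough numbers on why more front ends only help up to a point (every rate below is a made-up placeholder, just to show the shape of the problem):

Code:

# Toy capacity model: the web tier scales out, but one shared database is the ceiling.
requests_per_sec_per_frontend = 50    # assumed capacity of one front-end web box
db_queries_per_request = 8            # assumed queries per forum page view
db_queries_per_sec_max = 600          # assumed capacity of the single back-end DB

db_request_ceiling = db_queries_per_sec_max / float(db_queries_per_request)  # 75 req/s here
for frontends in (1, 2, 4, 8):
    web_capacity = frontends * requests_per_sec_per_frontend
    print("%d front end(s): %d req/s" % (frontends, min(web_capacity, db_request_ceiling)))
# Past two front ends the database caps throughput, so the DB has to be fixed first.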

Scott R 07-17-2007 09:43 AM

I remember Wayne posting his server configuration a while back; I need to search for that. But this is a relatively small implementation of VBB. I've worked on a few now that had threads with millions of views, hundreds of thousands of posts per thread, and far more complicated layouts.

The key is to have the right hardware and connection design. You need a decent four-core server, with the OS on a mirrored volume on channel A of a good SCSI RAID controller. The database needs to be on a RAID 5 on another channel, or preferably fibre-attached to a SAN or network-attached to a NAS device.

Second is to get NICs with TCP offload engines, and preferably a network optimizer like a Cisco WAAS or Riverbed device. After that you can support just about any level of load. Beyond that you get into server software configuration.

EDIT: Forgot something here. It's also nice to have a Cisco CSS or an F5 and a few small boxes to handle the front-end load.

foamy911 07-17-2007 11:16 AM

I doubt the bottleneck is CPU; I suspect it's disk I/O, so I'm not sure you need a four-core processor.
RAID 5 is not as fast as 0+1, and RAID 5 in degraded mode will suck much more than 0+1 in degraded mode. Yes, do put the OS on its own disk. I would have to vote a big NO on any network-attached storage solution.
Trunking NICs would be another good idea.
I would start with some performance testing before making any changes; LoadRunner and bonnie are good tools.
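If nobody wants to install bonnie on the box, even a crude timing pass like this will show whether sequential disk throughput is in a sane range (a very rough sketch, nowhere near the real tools; the file path and size are placeholders):

Code:

# Poor man's bonnie: time a big sequential write and read as a first sanity check.
# The read pass will mostly come out of the OS cache unless the file is bigger
# than RAM, so treat that number as an upper bound.
import os
import time

PATH = "io_test.tmp"          # put this on the volume you actually want to test
SIZE_MB = 256
CHUNK = b"x" * (1024 * 1024)  # 1 MB of dummy data

start = time.time()
with open(PATH, "wb") as f:
    for _ in range(SIZE_MB):
        f.write(CHUNK)
    f.flush()
    os.fsync(f.fileno())      # force it to disk so we time the disks, not the write cache
print("write: %.1f MB/s" % (SIZE_MB / (time.time() - start)))

start = time.time()
with open(PATH, "rb") as f:
    while f.read(1024 * 1024):
        pass
print("read:  %.1f MB/s" % (SIZE_MB / (time.time() - start)))

os.remove(PATH)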

Scott R 07-17-2007 11:34 AM

Quote:

Originally posted by foamy911
I doubt the bottleneck is CPU; I suspect it's disk I/O, so I'm not sure you need a four-core processor.
RAID 5 is not as fast as 0+1, and RAID 5 in degraded mode will suck much more than 0+1 in degraded mode. Yes, do put the OS on its own disk. I would have to vote a big NO on any network-attached storage solution.
Trunking NICs would be another good idea.
I would start with some performance testing before making any changes; LoadRunner and bonnie are good tools.

The performance drop on RAID 5 does not outweigh the redundancy; any revenue-generating database should be on 5. On today's controllers, with caching etc., you won't outrun the I/O. Processor cores are cheap and allow for expansion; VBB is a multi-thread-capable platform, and it makes great use of multiple cores.

foamy911 07-17-2007 01:28 PM

RAID 5 does not offer any more redundancy than 0+1; it actually offers less. RAID 5 cannot tolerate a double disk failure, whereas 0+1 can if the failures land on the same side of the mirror. 0+1 is also much faster than RAID 5. The only real benefit of RAID 5 is larger usable size.
"Any revenue-generating database should be on 5"... well, we run all our databases that require serious power on 0+1.

dfink 07-17-2007 01:54 PM

For the last couple of weeks I have not been able to access the BBS using my mobile 5 phone. Whatever you just did to rebuild the database appears to have corrected the problem with mobile 5, as it is now working.
If mobile 5 quits working again, I will send you another PM. Perhaps it's a sign of things not working quite right.

azasadny 07-17-2007 02:42 PM

I have been having many problems connecting to the forums since last Sunday (7/15).

Scott R 07-17-2007 03:41 PM

Quote:

Originally posted by foamy911
RAID 5 does not offer any more redundancy than 0+1; it actually offers less. RAID 5 cannot tolerate a double disk failure, whereas 0+1 can if the failures land on the same side of the mirror. 0+1 is also much faster than RAID 5. The only real benefit of RAID 5 is larger usable size.
"Any revenue-generating database should be on 5"... well, we run all our databases that require serious power on 0+1.

If it were as large as you say, it would be on 10. 0+1 is for small-time stuff, like tier 3 applications on low-buck NAS or a Celerra. Five or 10 is still the industry standard for databases; it's inexpensive, and you can use a low-buck BCV on IDE in another storage platform, or any sort of SRDF or DSRM site-mirror platform after that. If you're on 0+1 you're running some small-time stuff; we don't even touch that technology on larger platforms: too many disks, and too slow.

My definition of serious power would be 12 to 14 TB on a DMX-3 with site-to-site SRDF, RAID 5 or 10 with TimeFinder snapshots and BCVs. In Wayne's case it would be cheaper to do 5, and given the astronomical odds against losing two disks in the same array, he should be OK. 0+1 is just wasteful anymore, and generally bad advice.

It's the same argument as for active/passive clusters: too much idle equipment. I need three disks to do a 5; you need four, and the performance gain over 10 is marginal at best. You just keep multiplying those numbers up and up with 0+1, and you come to find out why most companies don't do it: it's just not worth it.

High cost and high overhead? No thanks. Just bad advice from whoever came up with that solution for you; there are much better ways.

http://www.acnc.com/04_01_0_1.html

foamy911 07-17-2007 06:18 PM

Quote:

Originally posted by Scott R
If it were as large as you say, it would be on 10. 0+1 is for small-time stuff, like tier 3 applications on low-buck NAS or a Celerra. Five or 10 is still the industry standard for databases; it's inexpensive, and you can use a low-buck BCV on IDE in another storage platform, or any sort of SRDF or DSRM site-mirror platform after that. If you're on 0+1 you're running some small-time stuff; we don't even touch that technology on larger platforms: too many disks, and too slow.

My definition of serious power would be 12 to 14 TB on a DMX-3 with site-to-site SRDF, RAID 5 or 10 with TimeFinder snapshots and BCVs. In Wayne's case it would be cheaper to do 5, and given the astronomical odds against losing two disks in the same array, he should be OK. 0+1 is just wasteful anymore, and generally bad advice.

It's the same argument as for active/passive clusters: too much idle equipment. I need three disks to do a 5; you need four, and the performance gain over 10 is marginal at best. You just keep multiplying those numbers up and up with 0+1, and you come to find out why most companies don't do it: it's just not worth it.

High cost and high overhead? No thanks. Just bad advice from whoever came up with that solution for you; there are much better ways.

http://www.acnc.com/04_01_0_1.html

By RAID 10 you actually mean RAID 1+0, and last time I checked, low-buck NAS solutions like NetApp provide their own RAID. Yeah, we're small: we only use EMC towers, Brocade fibre switches, and Fibre Channel on Sun Fire 25Ks for our larger DBs. Yes, we do BCV splits for backups. The volumes are managed by VxVM. Also, our production environments are mirrored to a DR site.
Now, back to Wayne's underlying issue: the performance of his DB, which is MySQL running on M$. Until someone actually looks to see what the bottleneck is (my bet is still on disk I/O), no one can say what will fix it.
Migration to Linux would be a great start.
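In the meantime, a five-minute look at MySQL's own counters would tell us a lot (sketch only; the credentials are placeholders and the thresholds are rules of thumb, not gospel):

Code:

# Quick look at MySQL's own counters for signs of disk-bound behaviour.
import subprocess

def status(var):
    """Fetch one SHOW GLOBAL STATUS counter via the mysql command-line client."""
    out = subprocess.run(
        ["mysql", "-u", "root", "-pPASSWORD", "-N", "-e",
         "SHOW GLOBAL STATUS LIKE '%s';" % var],
        capture_output=True, text=True).stdout
    return int(out.split()[-1])

key_reads         = status("Key_reads")             # index blocks read physically from disk
key_read_requests = status("Key_read_requests")     # index block read requests (cache hits + misses)
tmp_disk_tables   = status("Created_tmp_disk_tables")
tmp_tables        = status("Created_tmp_tables")

print("key buffer miss rate:        %.1f%%" % (100.0 * key_reads / max(key_read_requests, 1)))
print("temp tables spilled to disk: %.1f%%" % (100.0 * tmp_disk_tables / max(tmp_tables, 1)))
# Double-digit percentages usually mean the buffers are too small and the disks are
# doing work that RAM should be doing -- cheap to fix in my.cnf before buying hardware.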

TroyGT 07-17-2007 06:34 PM

Tossing MySQL in favor of SQL2003 might help as well... I wouldn't run anything on that.

Scott R 07-17-2007 07:15 PM

Quote:

Originally posted by TroyGT
Tossing MySQL in favor of SQL2003 might help as well... I wouldn't run anything on that.
'07 would be a better choice; '03 has a sunset date of Dec. '07. We have done a ton of migrations to '07 this year, and it's been easier than I imagined it would be.

DanL911sc 07-17-2007 07:42 PM

MySQL can handle the forum. It's just a load of text, after all. Someone with admin access needs to do some measurement to figure out what's slow, because there's probably a simple answer or two that doesn't require a bunch of $$$ for new hardware and software.
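Turning on the slow query log (log-slow-queries / long_query_time in my.cnf) would be the cheapest measurement. Something crude like this could then tally what shows up (a rough sketch only; the log path below is made up):

Code:

# Tally entries in the MySQL slow query log to spot the worst offenders.
from collections import Counter

LOG = "/var/lib/mysql/slow-query.log"   # placeholder path; set by the server config

counts = Counter()
with open(LOG) as f:
    for line in f:
        line = line.strip()
        # Skip the comment/header lines the slow log writes before each query.
        if not line or line.startswith("#") or line.upper().startswith(("USE ", "SET ")):
            continue
        counts[" ".join(line.split()[:4]).upper()] += 1   # group queries by their first few words

for prefix, hits in counts.most_common(10):
    print("%6d  %s" % (hits, prefix))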

foamy911 07-17-2007 09:24 PM

Quote:

Originally posted by DanL911sc
MySQL can handle the forum. It's just a load of text, after all. Someone with admin access needs to do some measurement to figure out what's slow, because there's probably a simple answer or two that doesn't require a bunch of $$$ for new hardware and software.
I'll drink to that !!!!!!!!!!!!!!!

