Re: Lousy day - and it's not over! (Work rant)
Quote:
I see things are the same on your side of the street as well. I can't wait till this whole thing is over. We're all miserable here right now.
z...did you, umm..try restarting the computer first? :)
ryan
Quote:
Also a Fortran/Cobol guy here, trained on the old IBM 360 using punch cards and jumper boards. MS is a pile of doggy crap on many things. Some of their networking and RAID issues are unbelievable...
Quote:
Shoulda given in and banged your colleague that was 'harassing' you. Then she'd still be there to help you.
Also, unless you're making the absolutely unattainable megabucks, you SERIOUSLY need a new job.
ianc
Quote:
-Z.
Not a regular poster here, but I had to share my sympathies with you. We're at the beginning of a year+ long project to pull all PCs out of our medical facilities nationwide, and replace them with Citrix clients coming back into our datacenter. We're doing this without testing... just taking the word of the consultants brought in to build the solution that it will all work. You can guess how well that has played out through our pilot sites so far...
You had me flinching when you talked about MS clustering. I'm officially a Windows-based sysadmin. We are moving all file & print services enterprise-wide to blade servers attached to a SAN I just got put through training on. The kicker? It's all going to be based on MS clustering. I know that one of these days the OS will break, and it won't be pretty... *shudder* I'm drinking a beer for you tonight.
Quote:
Every time we've had one of the servers take a hard hit, it has been brutal to get things back up. Often the problem is that the server that crashed put a reserve on a LUN in the storage device. The storage device will not release the reserve unless the original server comes back and says it can. So while the other server in the cluster knows that the crashed server isn't up (via the heartbeat ethernet connection), it can't see the disk, since the storage device is waiting for the crashed server to respond. (Not a bad explanation of MS for a mainframe guy, eh?! :D ) I think the answer lies at the server level - in the event that a server in a cluster is lost, the healthy server should be able to mimic the signature (server name and WWPN) of the bad server and indicate to the storage device that the reserve is no longer needed. Currently, the only way we were able to resolve the reserve was to bring up the corrupt server (rebuild the op sys) and get it to communicate with the storage device, after which it took several attempts to get the two servers to not only communicate with each other, but to recognize all the disks attached to them. Ok, too much techno babble, but someday shrouded may need this information!
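For what it's worth, here's a rough Python sketch of the failover behavior Z is wishing for - purely a toy model, not a real MSCS or storage-array API: if the heartbeat says the peer is dead and the peer still holds the reserve, the healthy node preempts it instead of waiting for the crashed box to come back.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    name: str
    wwpn: str
    alive: bool  # stands in for the heartbeat-ethernet status

@dataclass
class Lun:
    reserved_by: Optional[str] = None  # WWPN the array believes holds the reserve

    def preempt(self, dead_wwpn: str, new_wwpn: str) -> None:
        # The array will only release the reserve for its holder; here the
        # healthy node acts on the dead node's behalf, as the post suggests.
        if self.reserved_by == dead_wwpn:
            self.reserved_by = new_wwpn

def fail_over(lun: Lun, dead: Node, survivor: Node) -> bool:
    """Return True if the surviving node ends up owning the LUN."""
    if dead.alive:
        return False                       # heartbeat says the peer is fine; do nothing
    lun.preempt(dead.wwpn, survivor.wwpn)  # clear the stale reserve and take it over
    return lun.reserved_by == survivor.wwpn

# Example: node A crashed while holding the reserve; node B takes over.
a = Node("cluster-a", "wwpn-node-a", alive=False)
b = Node("cluster-b", "wwpn-node-b", alive=True)
print(fail_over(Lun(reserved_by=a.wwpn), a, b))  # True

The preempt step is exactly the part the real setup wouldn't do without the crashed server coming back, which is the whole complaint.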
is emc still the major player in the data storage arena? at least on the hardware side? i used to work as a recruiter in the data-storage-only field (storbyte.com). unfortunately, back in 2002, so many of the software-side companies were falling off of the map..either being bought out or disappearing altogether. i had storage-guy resumes running out my ears and nobody was hiring. the final straw was when a new-hire requisition i had for a sales guy at sun was pulled after end-of-fiscal-year meetings..sun decided to not only freeze hiring, but actually let people go as well..there went a guaranteed 30k in commissions out the window. sun was my largest client. if i'd only gotten the guy placed the month before, i might still be in business. :(
ryan
Quote:
I wasn't on the other though. If your job is making you that unhappy, you should be looking for another one real hard. 10 years down the road you will be a pretty unhappy camper when you look back, even if you are raking it in. I also am a sysadmin. We have a Dell-branded EMC FC4700 that has been nothing but headaches. My experience is the opposite of other people's here though. We have four MS clusters: Exchange, File, Financial, and SQL. The MS clusters NEVER screw up; it is always the EMC that is causing me headaches. More than once I've been here till 3-4 AM cursing them. We will be going with NetApp next time.
ianc
Quote:
Quote:
-------------------------------------------
ianc: While I have gone through some difficult times here at work, when things settle down, it's not as bad. Unfortunately, with the CEO announcing that more layoffs will happen soon, the atmosphere in the office is very strained, to say the least. I love doing what I do (storage admin), and I love working for a car company. (Cause I'm a car nut.) It's just that sometimes I get overwhelmed at work, and sometimes people take their personal agendas too far at work. It's just work. I only work here so I can have enough $$ to be able to play when I'm not here!
Quote:
-------------------------------------------
bigchillcar: Yep, EMC is still a big player in the storage field, but IBM has really taken the lead on SAN devices, IMHO. There are other storage companies popping up too - one that looks promising is a company called Xiotech. If we weren't so committed to IBM storage, I'd have these guys in here to at least demo a box. It is interesting how the storage field was shrinking a couple of years ago, but now, with the advent of SAN infrastructures, it has really grown. All good stuff! Now, where can I find a couple of ferrets?!? :eek:
-Z-man.
Quote:
Quote:
ryan
I have to ask.....why in the he11 would they use RAID5 for an OS? It's slow as heck anyway, then degrades badly on a disk failure/rebuild. It's ok for a read-only data archive, but not for an OS.... RAID 0+1 is your friend :)
At my last job, we lost 35TB of data, yes, lost it, when two disks of the same RSS (redundant stripe set) group failed and the storage unit (HP EVA 5000) forgot about its disk group, RAID level, and disk members..... THAT was a long day, er uh, week! Oh, and HP said it was impossible for that to happen. We got the ol' "one in a million" comment...... I feel for you man!
-B
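Rough back-of-the-envelope math behind the RAID5 gripe and that "one in a million" line, as a Python sketch; the group size, rebuild time, and failure rate below are made-up assumptions, not EVA specs.

def raid5_write_ios(logical_writes: int) -> int:
    # Classic RAID5 small-write penalty: read old data + read old parity,
    # then write new data + write new parity = 4 back-end I/Os per write.
    return 4 * logical_writes

def raid10_write_ios(logical_writes: int) -> int:
    # RAID 0+1 / RAID 10: each write just goes to both mirror halves.
    return 2 * logical_writes

def second_failure_chance(disks_left: int, rebuild_hours: float,
                          annual_failure_rate: float = 0.03) -> float:
    # Chance that any surviving disk in the group dies before the rebuild
    # finishes (crude: assumes independent failures and a constant rate).
    per_disk = annual_failure_rate * rebuild_hours / (24 * 365)
    return 1 - (1 - per_disk) ** disks_left

print(raid5_write_ios(1000), raid10_write_ios(1000))         # 4000 vs 2000 back-end I/Os
print(f"{second_failure_chance(13, rebuild_hours=24):.3%}")  # roughly 0.1% per rebuild

Even with those toy numbers, a tenth of a percent per rebuild across dozens of disk groups and a few years of rebuilds is a long way from one in a million.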