Comments
Good thing we're talking with the client about setting up a new server, where I'll configure everything and use NGINX so I can force much more aggressive server-side caching for when they have heavy loads on the site.
Their site usually has very little traffic, except when they have an emergency or a press release.
We also encountered some errors when the Symfony logs and cache had grown to 30 GB. I cleared the cache and logs and the error went away. Thankfully our DB tables were not corrupted. It was only on our dev server though 😂
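Clearing that out is roughly this (paths depend on the Symfony version; 2.x used app/console, app/cache and app/logs, newer versions use bin/console and var/):

    php app/console cache:clear --env=prod    # or: php bin/console cache:clear
    rm -rf app/logs/*.log                      # or: var/log/*.log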
@Bitwise I wasn't aware that a full drive could corrupt tables. I've had a full drive (or full partition) on one of my private servers, and MySQL just complained. Maybe I noticed it before it became a problem.
We had debugging enabled because the app we made for the client wasn't working properly for one user (only one out of thousands), and we wanted to see what was happening.
But because of the old PHP version it was throwing warnings etc. We never heard back from that one guy with app issues, so debugging was just left enabled, unbeknownst to us that it would grow into a 30 GB file.
The website for our biggest client went down and the server went haywire. We don't provide any infrastructure for this client, so we called their IT partner to start figuring it out.
They started blaming us, asking us if we had upgraded the website or changed any PHP settings, all of which got a firm no from us. They told us they had competent people working on the matter.
TL;DR: their people aren't competent, and I ended up fixing the issue.
Hours go by, nothing happens. The client calls us, we call the IT partner, nothing; they don't understand anything and told us they can't find any logs.
So we set up a conference call with our CXO, me, another dev and a few people from the IT partner.
At this point I'm just asking them if they've looked at this and that, with no good answers, so I fetch a long Ethernet cable from my desk, run it to the CXO's office and hook up my laptop to start looking into things myself.
The IT partner still can't find anything wrong. I tail the httpd error log and see thousands upon thousands of warning messages about the mysql extension being loaded twice, but that's not the issue here.
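Roughly what that looks like (the log path is an assumption, it varies by distro, e.g. /var/log/apache2/error.log on Debian):

    tail -f /var/log/httpd/error_log                      # follow Apache's error log live
    grep -c "already loaded" /var/log/httpd/error_log     # count the duplicate-module warnings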
Check top and see there are 257 instances of httpd, 256 of which are workers spawned by the parent httpd. mysql is using 600% CPU, and whenever I try to connect to mysql through the CLI it throws a "too many connections" error.
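A rough sketch of those checks (exact commands are an assumption):

    top                                      # mysqld pegged at ~600% CPU across cores
    pgrep -c httpd                           # 257 httpd processes (parent + 256 workers)
    mysql -u root -p -e "SHOW PROCESSLIST"   # fails with ERROR 1040: Too many connections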
I heard the IT partner talking about a DDoS attack, so I asked them to pull it off the public network and only give us access through our VPN. They do that, reboot the server, same problems.
Finally we get the IT partner to roll the VM back to the state from earlier last night. Everything works great; 30 minutes later, it crashes again. At this point I'm getting tired and frustrated. This isn't my job; I thought they had competent people working on this.
I notice that the DB has a few corrupted tables and ask the IT partner to get a DBA to look at it. To no avail.
5 o'clock is here, and we decide to give the VM rollback another try, but first we go home, get some dinner and resume at 6 pm. I had told them I wanted to be in on this call, and said to let me try this time.
They spend ages doing the rollback, and then for some reason they have to reconfigure the network and shit. Once it booted, I told their tech to stop mysqld and httpd immediately and prevent them from starting at boot.
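On a box that old (PHP 5.3 era, so most likely SysV init rather than systemd; that part is an assumption), this comes down to something like:

    service httpd stop && service mysqld stop    # stop the web server and database
    chkconfig httpd off                          # keep them from coming back on reboot
    chkconfig mysqld off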
I can now look at the logs leading up to this issue. I notice our debug flag was on and had generated a 30 GB log file. Tail it and see it's what I'd expect: warnings upon warnings. All the other logs for MySQL and Apache are huge too, so the drive is full. Just gotta delete it.
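Confirming and cleaning that up is roughly this (file names are placeholders):

    df -h                          # the partition holding the logs sits at 100%
    du -sh /var/log/*              # find the worst offenders
    tail -n 50 huge_debug.log      # sanity-check that it's just repeated warnings
    rm huge_debug.log              # or truncate in place with: > huge_debug.log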
I quietly start Apache and MySQL, see the website is working fine, shut them down again and take a copy of the /var/lib/mysql and /etc directories just to have backups.
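With both services stopped, that backup is just a tarball of the data and config directories, more or less:

    tar czf /root/server-backup-$(date +%F).tar.gz /var/lib/mysql /etc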
Starting to connect a few dots, but I wasn't exactly sure if I was right. Had the full drive caused MySQL to corrupt itself? Only one way to find out: start Apache and MySQL back up, and just wait and see. Meanwhile I fixed the mysql extension being loaded twice; some genius had put the line loading mysql.so at both the top and the bottom of php.ini.
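Easy to confirm and fix (the php.ini path is an assumption for a box that old):

    grep -n "mysql.so" /etc/php.ini    # shows two extension=mysql.so lines
    # remove one of them, then restart Apache so mod_php reloads the config
    service httpd restart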
While waiting on the server to crash again, I'm talking to the IT support guy, who told me they haven't updated anything on the server except security patches now and then, and that they don't have anyone familiar with this setup. No shit, it's running PHP 5.3 -.-
Website up and running 1.5 hours later, mission accomplished.
rant
wk98
“competent” it partner