26

This is why I love GNU/Linux. No scary bullshit, no "call support", no secrecy. Fuck yes!

Comments
  • 5
    I'd love a file system that doesn't run into inconsistencies. And WTF is that manual fsck? Why can't the system just fsck itself if it has fucked itself up? Running it manually adds some magic dust or what?
  • 5
    @Fast-Nop it's about permissions; only the administrator can do such things when something unexpected breaks. The inconsistency was caused by me, not the system. I found it funny: your computer's way of fixing itself is to tell the user what to do.
  • 3
    @Jilano and yes, it's the magic 🎶
  • 4
    @Fast-Nop yes, the magic dust was added in version 3.1.2
  • 3
    @stereohisteria I see, so if you want to run the check in that situation, you also have to give the root password? Then it makes sense; that way a file system hiccup doesn't hand arbitrary people root access.
  • 1
    It's less a user security matter than a process and priority issue. Some services cannot run without root permission, and some others aren't meant to run with such privileges. Fsck checks and fixes sectors by writing to the disk, and for that root privileges are required. But the system boots you straight into a root shell when this is required.
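
    For the curious, it's roughly this from that root maintenance shell (the device name is just an example here; use whatever the boot message actually complains about):

        # -y answers "yes" to all of fsck's repair prompts
        fsck -y /dev/sda2
        reboot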
  • 14
    Best feeling is when you deploy a system with LUKS and the initramfs can't unlock the rootfs on its own (due to configuration errors which I've since fixed; don't get a boner off of a Linux guy having his system fsck'd just yet, Windows warriors!), yet you can still make it work: unlock and mount the rootfs to /new_root, exit, and watch the boot process continue as if nothing happened 😎 (rough sketch below)
    Linux is amazing 😍

    Personally I'd also like to recommend btrfs: it deals with power losses and other sudden interruptions much better than ext4 does, and it comes with a lot of features too (snapshots, built-in RAID instead of mdadm, etc.).. and it's developed in the mainline kernel 😁
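
    The rescue itself was roughly this, for anyone curious (device and mapper names are made up; the exact steps depend on your initramfs):

        # from the initramfs emergency shell:
        cryptsetup open /dev/sda2 cryptroot    # prompts for the LUKS passphrase
        mount /dev/mapper/cryptroot /new_root  # hand init the real rootfs
        exit                                   # leave the shell; boot carries on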
  • 3
    "ERROR: Bailing out!" 🤣
  • 9
    @kenogo for the same reason that Debian-based distros don't adopt the latest packages until half a decade after their release either.. if it hasn't been marked stable for years yet, it's obviously way too new and unstable 😏
  • 1
    @Condor which is also why most people would rather base their servers on Debian than on Arch.
  • 7
    @Fast-Nop my internet-facing servers run Ubuntu here; clients and local servers run Arch 🙂

    Granted, I had to step in a few times over the last couple of years, but overall the system is pretty stable. Because ancient != stable and new != unstable. Often I feel that Debian maintainers are pretty lazy for not compiling and publishing already, just like governments are for not adopting new stuff. Also, I find their distrust of upstream testing truly disheartening. One should rely more on developers' ability to deliver a stable OSS product.

    Hence I tend to disagree with the industry standard. Arch is just as stable as its operator is skilled. And yeah, I don't want to have to fiddle around with configs all over the place just to set up an AP or do routing or make a Pi-hole either.. that's where Arch really overcomplicates things. But putting deployments half a decade behind development and calling it stable isn't the answer to stable systems, be it clients or servers.

    But the industry standard being what it is, my mailers run Ubuntu. And now that 18.04 is around the corner, I can't upgrade without redeploying the whole damn thing, due to the nature of milestone releases.. or I risk configuration conflicts during a dist-upgrade. Stable?
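
    For reference, the milestone jump is roughly this, and every flagged config file along the way is a chance to break the mailer (which is exactly the risk I mean):

        # get current on the old release first
        apt update && apt full-upgrade
        # then let Ubuntu walk you to the next LTS, stopping at every config conflict
        do-release-upgrade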
  • 0
    @Condor Ubuntu and Arch are perfectly fine for hobby projects. But a professional hoster with that setup is just out of the question for anything serious, simply because it shows they don't even understand why it's a problem.
  • 7
    @Fast-Nop I recall that @Linux and @linuxxx work at hosters. In fact they are sysadmins just like me, though admittedly they actually work in professional environments whereas I don't. Arch would definitely be out of the question there, and I wouldn't entrust an entire datacenter to it either. Arch is for hobby projects; it isn't unstable, but it isn't tailored to the server market either.

    For the hypervisors which all the hosted machines would run on, I'd go with Proxmox VE or VMware - the former of which I'm using to drive the VMs and ensure the separation between servers in my home network as well. Those are built for that exact purpose, and while Proxmox is free and open source, both are very much production-grade. So is Ubuntu Server. Not talking about their stupid desktop editions, to be clear - I hate Unity and I'd chew on any "sysadmin" that's running GUIs on their servers. But the Ubuntu server editions are very much tailored to production servers.
  • 3
    @Fast-Nop
    No problems using Ubuntu in prod m8.
  • 1
    @Condor
    I do actually prefer Nutanix as a hypervisor
  • 6
    @Linux first time I've heard of this one.. interesting! Will check it out 😃
  • 1
    @Condor
    They have a community edition that you can try!
  • 2
    @Linux @Condor All of my (non-Heroku) servers run Ubuntu.

    I don't know what Heroku uses, but I wouldn't be surprised if it was also an Ubuntu-based container.
  • 0
    @Condor @Linux using Mint and Kubuntu in prod!
  • 1
    @linuxxx
    The terme "prod" applies to systems that servers a purpose like serving a website, database, caching and other stuff that is basically "not easily rebooted"

    I doubt you are using kubuntu and mint for that purpose and just as a client.
  • 0
    @Linux I personally call anything engaged in a production environment ‘prod’
  • 7
    @linuxxx "Prod" or production servers generally refers to servers that are available to end users (be that internet users, other servers, internal network users, whatever) and that can indeed not easily be taken down. Rebooting is easy: just ensure there's another one ready to accept connections in the meantime so that nobody experiences availability issues (see the sketch at the end of this comment), or tell people in advance that the server will go down.

    Essentially those are the servers that you pray stay stable and that your changes won't mess up! Hence why dev and test/staging servers also exist in a full deployment process. In my case (the mailers) the dev and test servers don't exist because sadly I don't shit money, but anyway.. for a really good server setup you'd at least have a dev host, a test host and two or more prod hosts that work in tandem.

    Client devices on the other hand.. eh, you don't want those to crash on ya either of course, but it's not really like the downtime will affect anyone else trying to connect to them.. and if there are people connecting to them, chances are you've got bigger problems than just maintaining uptime :P
    So yeah, clients are generally not really referred to as prod :)
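
    To make the "another one ready to accept connections" bit concrete, here's a hypothetical drain-and-reboot with two mailers behind HAProxy (the backend/server names are invented, and it assumes the runtime admin socket is enabled):

        # take mx1 out of rotation; mx2 keeps serving in the meantime
        echo "disable server mailers/mx1" | socat stdio /run/haproxy/admin.sock
        ssh mx1 reboot
        # once mx1 is back up and healthy, put it back in rotation
        echo "enable server mailers/mx1" | socat stdio /run/haproxy/admin.sock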
  • 1
    @linuxxx
    What you personally say is not what other IT people say ;)

    Then most of the people here can say: "I use Arch with i3 in prod" or "I use OS X in prod"