23
irene
5y

We had a long-time developer who was fired last week. The customer decided they did not want to be part of the new Microsoft Azure pattern. They didn't like being tied to a vendor they had little control over; they'd been stuck in Windows monoliths for the last 20 years. They asked us to switch over to some open source tech with scalable patterns.

He got on the phone and told them that they were wrong to do it. "You are buying into a more expensive maintenance pattern!" "Microsoft gives the best pattern for sustaining a product!" "You need to follow their roadmap for long term success!" What a fanboy.

Now all of his work, including his legacy stuff, is dumped on me. I get to furiously build a solution based on scalable Node containers for Kubernetes, with some parts living in AWS Lambda. The customer is super happy with it so far and it has deepened their resolve to avoid anything in the "Microsoft shop" pattern. But wow, I'm drowning in work.
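
To give a sense of the shape of it: the Lambda pieces are just small stateless handlers, roughly like the sketch below (the handler name and payload are made up, assuming the standard aws-lambda typings; this is illustrative only, not the actual code).

    // Hypothetical Lambda handler: one of the small stateless pieces mentioned above.
    import { APIGatewayProxyHandler } from "aws-lambda";

    export const handler: APIGatewayProxyHandler = async (event) => {
      const name = event.queryStringParameters?.name ?? "world";
      return {
        statusCode: 200,
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ message: `hello, ${name}` }),
      };
    };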

Comments
  • 22
    Switching the vendor lock-in from MS to Amazon does not look like an improvement to me.
  • 5
    @Oktokolo No vendor lock-in. Just a few serverless apps running to reduce latency for the end user. If Amazon burned down today the app would work the same, just a bit slower for initial page loads depending on how far you are from the customer's servers.
  • 7
    @Oktokolo For example, the print-PDF service is in a Node container. If the user is in Greece they get the PDF rendered on the nearest AWS server. Otherwise the request has to cross an ocean to reach a server and come back.
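    The shape of that service is roughly the sketch below (assuming Express plus pdfkit; the route and fields are made up for illustration):

      // Hypothetical print-PDF endpoint for the Node container.
      import express from "express";
      import PDFDocument from "pdfkit";

      const app = express();

      app.get("/print", (req, res) => {
        res.setHeader("Content-Type", "application/pdf");
        const doc = new PDFDocument();
        doc.pipe(res);                       // stream the PDF straight back to the caller
        doc.text(`Report for ${req.query.customer ?? "unknown customer"}`);
        doc.end();
      });

      app.listen(8080);                      // the container only needs to expose this port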
  • 0
    @irene
    Okay, that makes sense.
  • 2
    @Oktokolo Except the part where all the cloud stuff happens. Haha.
  • 3
    @irene
    If security or compliance is not an issue and you take care not to lock yourself in to a single cloud provider, using the cloud is fine.
  • 0
    Yeah, k8s is always fun. Not sure it's needed in all cases, but hey, you get some resume fodder.
  • 3
    @Oktokolo Yeah. There are a bunch of people at work that go, “We need Azure DevOps real bad right now.”
    I say, “What do we gain by locking into it?” It turns out that 99% of the time they don’t know. Then I tell them that we don’t need to lock into a vendor.

    “You are the type of person that would get kidnapped and not leave when the kidnapper is out and leaves the door unlocked.”
  • 0
    Need some tool for porting code running on Azure to an open stack.
  • 0
    @Wisecrack If you write it for an open stack you can run it in Azure containers, Kubernetes, OpenShift. They get you by putting you into a DevOps pattern you can’t leave in one step: collaboration tools, build tools, logs, integrations, etc. Want to go somewhere else? It’s too hard.
  • 1
    Reminds me of the “Don't get locked up into avoiding lock-in” article, which has valid points from my perspective:

    https://martinfowler.com/articles/...
  • 0
    The vendor lock-in exists from the moment you use a proprietary system in the cloud.
    I can build an app on Docker or other containers and deploy it without any modifications to Azure and AWS.
    But if I use, let’s say, Azure Storage as a service, well, that part needs to be changed for AWS.
    Personally, I prefer to build systems only on cloud services, with no VMs / containers of any kind. Yes, it locks me into what Azure offers, but it’s plenty, at least for my needs.
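    To make the “that part needs to be changed” bit concrete: if the storage call sits behind one small interface, only the adapter changes when moving providers. A sketch, with the interface and class names made up, using the @azure/storage-blob and @aws-sdk/client-s3 SDKs:

      // Hypothetical seam: only this adapter changes if the blob storage provider changes.
      import { BlobServiceClient } from "@azure/storage-blob";
      import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

      interface ObjectStore {
        put(key: string, data: Buffer): Promise<void>;
      }

      class AzureBlobStore implements ObjectStore {
        constructor(private conn: string, private container: string) {}
        async put(key: string, data: Buffer): Promise<void> {
          const blob = BlobServiceClient.fromConnectionString(this.conn)
            .getContainerClient(this.container)
            .getBlockBlobClient(key);
          await blob.upload(data, data.length);    // the Azure-specific call lives only here
        }
      }

      class S3Store implements ObjectStore {
        private s3 = new S3Client({});
        constructor(private bucket: string) {}
        async put(key: string, data: Buffer): Promise<void> {
          await this.s3.send(new PutObjectCommand({ Bucket: this.bucket, Key: key, Body: data }));
        }
      }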
  • 1
    @p0s1x That is a great article. I hadn’t read it before.
  • 0
    @NoToJavaScript Not everything can be internet-facing. I have had to make a number of systems that must continue to run when there is no internet connection available in a building. There are multiple buildings across the world. Those systems connect to a central cloud-hosted application when a connection is available. You may occasionally have a collection of machines move to another location, so you may have to re-home some of the on-prem software. You literally can’t build that system without an on-premises cloud. Azure doesn’t work for that application but Kubernetes does.
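    The pattern is basically store-and-forward, roughly like the sketch below (the endpoint URL and record shape are made up, and it assumes Node 18+ for the built-in fetch):

      // Hypothetical store-and-forward sync from an on-prem site to the central app.
      type Reading = { machineId: string; value: number; at: string };

      const pending: Reading[] = [];        // a real system would persist this queue locally

      export function record(reading: Reading): void {
        pending.push(reading);              // local systems keep working with no internet at all
      }

      async function flush(): Promise<void> {
        while (pending.length > 0) {
          const res = await fetch("https://central.example.com/api/readings", {
            method: "POST",
            headers: { "Content-Type": "application/json" },
            body: JSON.stringify(pending[0]),
          });
          if (!res.ok) return;              // still offline or central app down: retry next tick
          pending.shift();                  // drop the record only once the central app has it
        }
      }

      setInterval(() => flush().catch(() => { /* offline: ignore and retry later */ }), 30_000);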
  • 2
    @irene The article is hosted on Martin Fowler's site. I'm a big fan of Martin Fowler, and he has done great stuff for the software engineering field. I highly recommend checking out the articles on his site.
  • 0
    Terraform? Get rid of any lock-in
  • 0
    You can switch from one managed PostgreSQL to another in minutes (well, however long it takes to copy the data and modify the config); you'll spend weeks or even months achieving the same kind of reliability, backup policies, scalability, fault tolerance... on your own, and even once your setup works, you'll never know how stable it really is until it's too late - something big cloud providers will always do better, because they have thousands of devs and hundreds of thousands of users, while your company has ... you. And that's just one service; an average small-to-medium HA stack uses a file server, database, cache, core app, search engine, log management, system monitoring, firewall, load balancer, secret vault, ... with each of those requiring the same level of reliability and fault tolerance.
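    The "modify config" step really can be that small when the app only ever sees a connection string. A sketch using node-postgres, with the env var name and query made up for illustration:

      // Hypothetical: the only provider-specific piece the app sees is DATABASE_URL.
      import { Pool } from "pg";

      const pool = new Pool({ connectionString: process.env.DATABASE_URL });

      export async function findUser(id: number) {
        const { rows } = await pool.query("SELECT id, name FROM users WHERE id = $1", [id]);
        return rows[0];                     // switching vendors = copy the data, point this at the new host
      }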
  • 0
    @hitko Not everyone builds software for small businesses. 🙄
  • 0
    @irene So? Are you going to tell me your "large enterprise" doesn't have centralised config management? Or are you trying to say your company has a team of experienced sysadmins for each service and can totally ensure the same level of service for each part of the stack as fully managed cloud solutions? That second one kinda contradicts the part where a single person could tell your customer to use Azure, and then dump it all on you...
  • 0
    @hitko I worked in a multi-billion-euro international company that hires teams of engineers to manage on-site systems and separate teams to manage IT. I now work for a company as a consultant, so they put me into a context and I make stuff with whatever resources I have.
  • 0
    @hitko Normally you pick several dissimilar locations and do a basic install to prove that the solution architecture is flexible enough to work. You don’t tell them to “pick one or the other”; you set up the candidate solutions and demonstrate them working with some basic functionality. In this case the customer was unwilling to set up the pilot Azure HCI instance because of the licensing cost for a simple pilot.

    Often a large business acquires a smaller company. There is no centralized config management until the location is converted. Sometimes a location never gets converted, if the cost of rebuilding industrial systems at that location is higher than the IT cost of maintaining it for the rest of its operational life. Then new industrial systems are created using the newer standards that will allow them to be transplanted to another location.
  • 0
    @irene That doesn't change my point that a managed solution (in your case Outpost / Azure HCI / other managed HCI) is tested by a way larger audience and requires less input to achieve the same QoS than a self-managed solution (not to mention devs are generally more familiar with them and it's easier to find resources & people), and that with proper ops, migrating from one managed service to another (or even using different vendors in a hybrid cloud arrangement) often provides better results than sticking to self-managed services.

    I didn't mean a fully centralised config across all acquired products; if they're not converted it doesn't really matter, since they're not dependent on your main stack and you're either going to leave them alone or you'll have to rewrite the configuration and provider implementation anyway. And if you bring them in properly, switching between managed solutions should be relatively easy compared to managing those solutions on your own.
  • 0
    @hitko So basically someone else’s managed services will always be better run than services the client manages themselves? There is a time to choose COTS and a time to build custom. If you blindly subscribe to “they will do it better” then COTS always makes sense.

    Acquisitions do need to be adapted because they get connected to a central management system that handles inventory movement, etc. So the adaptations for that happen in a transformed read-only layer that sits at the edge of the network. Other locations will read data but they don’t write it. So the “main stack” exists at a location level, and it flows data to a ”management stack” where it gets consumed in various ways.
  • 0
    @irene It will be better than anything you can do with the same amount of input a managed service takes. Obviously you can always hire an equal team of experts to manage it in-house, you can run a separate testing environment at the scale of your actual deployment, and you can potentially even provide your own managed service to the world just for the sake of having more varied real-world feedback to improve your in-house service, especially if you're trying to achieve the same level of service.

    I'm not trying to say business policies or savings one may make through cheaper hardware deals / a cheap local workforce can't outweigh these things (or that a business can't opt for an in-house solution with lower QoS); I'm saying that with proper management at the product level, the input (in terms of man-hours) required to switch a service from one vendor to another is lower than the input required to manage that service in-house.