Spent another half day learning ELK and how to programmatically query and run aggregations against the data that's now collected.

So I can feed it into a testing framework for releases.
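In case anyone's curious what the query side looks like, here's roughly the kind of thing I mean - a minimal Python sketch straight against the ES search API. The index pattern, field names and the 1% threshold are made up for illustration, not my actual setup.

```python
import requests

ES_URL = "http://localhost:9200"   # assumption: the ELK VM, default port
INDEX = "app-logs-*"               # hypothetical index pattern written by Logstash

# Count responses per HTTP status over the last hour (aggregation only, no hits).
query = {
    "size": 0,
    "query": {"range": {"@timestamp": {"gte": "now-1h"}}},
    "aggs": {"by_status": {"terms": {"field": "status_code"}}},
}

resp = requests.post(f"{ES_URL}/{INDEX}/_search", json=query, timeout=30)
resp.raise_for_status()

buckets = resp.json()["aggregations"]["by_status"]["buckets"]
total = sum(b["doc_count"] for b in buckets)
errors = sum(b["doc_count"] for b in buckets if int(b["key"]) >= 500)

# The release test framework can then assert on the numbers, e.g.:
assert total == 0 or errors / total < 0.01, "5xx rate above 1% after release"
```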

I sorta feel like I'm dragging everyone else into the light...

Like "you see what you've been missing all these years? This is how it's supposed to be these days..."

Data, data, data... Useful data... This is what you can do when you have structured and searchable logs rather than huge messy text files...

Comments
  • 2
    ELK seems nice and all, but also resource heavy, and if I understood everything correctly it also has a few gaping holes if not configured properly.
  • 1
    @ScriptCoded right now it runs on its own VM, so if the resources aren't used that'd be a waste.

    Not sure about filestash but I haven't seen anything odd.

    Maybe we don't get enough data. It takes a long time to set up because I have to build everything from scratch... rather than just being onboarded, so it's lots of trial and error and figuring stuff out...

    But the thing is, it just feels like: when are you all going to stop living in the '90s and '00s... Get up to date already...
  • 0
    @billgates No doubt ELK is great when set up, seems nice, but I doubt my own skills when it comes to setup.
  • 1
    Be careful with the F-ing ES indexes. ES wants to index All Things, and that's what usually causes it to be resource heavy. Use an index whitelist (only a few specific fields), and production resource use will go down significantly.
    You can later move the data to a dedicated ES instance and index whatever you want there.
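    Something like this is what I mean by whitelisting, for anyone who hasn't done it - create the index with dynamic mapping switched off so only the few fields you explicitly list get indexed. Index name and field names here are just examples.

    ```python
    import requests

    ES_URL = "http://localhost:9200"    # assumption: local ES
    INDEX = "app-logs-whitelisted"      # hypothetical index name

    mapping = {
        "mappings": {
            # False: unmapped fields are kept in _source but not indexed/searchable.
            # Use "strict" instead if you want ES to reject docs with unknown fields.
            "dynamic": False,
            "properties": {
                "@timestamp":  {"type": "date"},
                "level":       {"type": "keyword"},
                "service":     {"type": "keyword"},
                "status_code": {"type": "integer"},
                "message":     {"type": "text"},
            },
        }
    }

    requests.put(f"{ES_URL}/{INDEX}", json=mapping, timeout=30).raise_for_status()
    ```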
  • 0
    @magicMirror Index whitelist? I tried data mapping, which I think helped; it stopped indexing all the different query parameters.

    My setup is: logs are on the prod server, they get shipped in real time to the ELK instance, and Logstash processes and loads them into the indexes.

    Most of the CPU goes to Kibana/ES queries, I think. It seems to do searches and graphs pretty fast...
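    For anyone picturing the mapping part: since Logstash keeps creating new indexes, the mapping generally goes into an index template so every new index inherits it. Rough sketch only - the pattern and fields are placeholders, not the real config (ES 7.8+ composable templates; older versions use _template instead).

    ```python
    import requests

    ES_URL = "http://localhost:9200"   # assumption: the ELK VM

    template = {
        "index_patterns": ["app-logs-*"],   # hypothetical pattern Logstash writes to
        "template": {
            "mappings": {
                "dynamic": False,           # don't index fields outside the whitelist
                "properties": {
                    "@timestamp": {"type": "date"},
                    "level":      {"type": "keyword"},
                    "path":       {"type": "keyword"},
                    "message":    {"type": "text"},
                },
            }
        },
    }

    requests.put(f"{ES_URL}/_index_template/app-logs", json=template, timeout=30).raise_for_status()
    ```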
  • 1
    @billgates The issue with ES I'm referring to is the "cost" of adding data to the ES store. If your logs are actually structured in a "good" way, and devs don't log dumb stuff, then no problem. But if the devs add logs with 20+ fields of random key/values - ES will slow down significantly after a while, as it updates 100+ indexes per data point.

    Mapping should take care of the problem nicely. The reason most ELK deployments fail is no mapping - or, as I call it, "index whitelisting".
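    To make it concrete: with dynamic mapping off, the junk fields are still stored in _source but never indexed, so the mapping doesn't keep growing. Rough sketch against the hypothetical whitelisted index from above (ES 7+ response shape):

    ```python
    import requests

    ES_URL = "http://localhost:9200"    # assumption: local ES
    INDEX = "app-logs-whitelisted"      # hypothetical index with "dynamic": false

    # A log line where a dev dumped a random key/value...
    doc = {"@timestamp": "2020-01-01T00:00:00Z", "level": "info",
           "message": "checkout ok", "random_field_17": "whatever"}
    requests.post(f"{ES_URL}/{INDEX}/_doc", json=doc, timeout=30).raise_for_status()

    # ...is accepted and kept, but the unmapped field was never indexed, so a
    # search on it finds nothing and no new index structures were built for it.
    hits = requests.post(
        f"{ES_URL}/{INDEX}/_search",
        json={"query": {"match": {"random_field_17": "whatever"}}},
        timeout=30,
    ).json()["hits"]["total"]["value"]
    print(hits)  # 0
    ```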