
In my current org we had an AWS SES event processor written in Node.js, and it struggled every time there were more than 1,000 messages in the queue. It looped over every single message, made some DB calls, then moved on to the next one. At one point we had to run 300 containers of this thing to clear out the queue, and it was still horribly slow.

I rewrote it in Go with channels and goroutines, and now a single container handles up to 100k messages in the queue. I used 10 goroutines that constantly pull batches of 10 messages and push them onto a channel, then spawn one goroutine per message to process it quickly. I'm so proud of this solution; we've since brought this workflow to many other event-processing services. 😎
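A minimal sketch of that pattern, assuming the AWS SDK for Go v2 and SQS; the queue URL and the process function are hypothetical stand-ins for the real pipeline, and error handling is simplified:

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/sqs"
	"github.com/aws/aws-sdk-go-v2/service/sqs/types"
)

const queueURL = "https://sqs.us-east-1.amazonaws.com/123456789012/events" // hypothetical

func main() {
	ctx := context.Background()
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatal(err)
	}
	client := sqs.NewFromConfig(cfg)

	msgs := make(chan types.Message, 100)

	// 10 receiver goroutines, each long-polling batches of up to 10 messages
	// and feeding them into the shared channel.
	for i := 0; i < 10; i++ {
		go func() {
			for {
				out, err := client.ReceiveMessage(ctx, &sqs.ReceiveMessageInput{
					QueueUrl:            aws.String(queueURL),
					MaxNumberOfMessages: 10,
					WaitTimeSeconds:     20, // long poll to avoid busy-looping on an empty queue
				})
				if err != nil {
					log.Println("receive:", err)
					continue
				}
				for _, m := range out.Messages {
					msgs <- m
				}
			}
		}()
	}

	// One goroutine per message, as described in the post (unbounded fan-out).
	for m := range msgs {
		m := m
		go func() {
			process(m) // hypothetical per-message work (DB calls, etc.)
			if _, err := client.DeleteMessage(ctx, &sqs.DeleteMessageInput{
				QueueUrl:      aws.String(queueURL),
				ReceiptHandle: m.ReceiptHandle,
			}); err != nil {
				log.Println("delete:", err)
			}
		}()
	}
}

func process(m types.Message) {
	log.Println("processing:", aws.ToString(m.Body))
}
```

One goroutine per message works well when the per-message work is mostly I/O-bound, but note that the fan-out here is unbounded; the comments below discuss a bounded variant.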

Comments
  • 2
    Nice!

    I had a similar problem to solve with a burst-type service: 24 hours of 1-5 messages per minute, then 500k messages over 2-5 minutes. Fun!

    Go has a limit on how many goroutines can exist at one time. Hopefully you will not encounter that particular problem, as I did. My specific solution was to pull from the incoming queue and push to an internal channel, with a limited pool of internal consumers to prevent a "the burst killed it" scenario. Also, that should smooth the memory-use graph.
  • 1
    @magicMirror yes, we did the same using an errgroup with a limit on the number of goroutines. It works beautifully! (See the sketch after the comments below.)
  • 0
    I meant SQS not SES
  • 1
    Good one! Even I felt nice just by reading this. Lol
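
For reference, a minimal sketch of the bounded approach described in the comments above, using errgroup with SetLimit from golang.org/x/sync; the work channel and processMessage handler are hypothetical:

```go
package main

import (
	"context"
	"log"

	"golang.org/x/sync/errgroup"
)

// consume drains work items from the channel with at most maxWorkers
// goroutines in flight, so a burst cannot spawn an unbounded number of
// goroutines (hypothetical work type and handler).
func consume(ctx context.Context, work <-chan string, maxWorkers int) error {
	g, ctx := errgroup.WithContext(ctx)
	g.SetLimit(maxWorkers) // g.Go blocks once maxWorkers goroutines are running

	for msg := range work {
		msg := msg
		g.Go(func() error {
			return processMessage(ctx, msg) // hypothetical per-message handler
		})
	}
	return g.Wait()
}

func processMessage(ctx context.Context, msg string) error {
	log.Println("processing:", msg)
	return nil
}

func main() {
	work := make(chan string, 3)
	work <- "a"
	work <- "b"
	work <- "c"
	close(work)

	if err := consume(context.Background(), work, 2); err != nil {
		log.Fatal(err)
	}
}
```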