Posts tagged 'performance'
May 17, 2023
by
Michał Kuratczyk
RabbitMQ 3.12 will be released soon with many new features and improvements.
This blog post focuses on the performance-related differences.
The most important change is that the lazy mode for classic queues is now the standard behavior (more on this below).
The new implementation should be even more memory efficient
while providing higher throughput and lower latency than either the lazy or non-lazy implementations did in earlier versions.
For even better performance, we highly recommend switching to classic queues version 2 (CQv2).
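For readers who want to try CQv2 right away, one way to opt in (hedged sketch: the `classic_queue.default_version` key and the `x-queue-version` queue argument are our recollection of the configuration surface, not quoted from this post) is via `rabbitmq.conf`:

```ini
# rabbitmq.conf — assumed key name; makes newly declared
# classic queues use the version 2 implementation by default.
classic_queue.default_version = 2
```

Individual queues can alternatively be declared with an `x-queue-version` argument set to `2`, or moved over with a `queue-version` policy.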
May 31, 2022
by
David Ansari
Recent Erlang/OTP versions ship with Linux perf support.
This blog post provides step-by-step instructions for creating CPU and memory flame graphs for RabbitMQ, so you can quickly and accurately detect performance bottlenecks.
We also provide examples of how flame graphs have helped us to increase message throughput in RabbitMQ.
May 16, 2022
by
Michał Kuratczyk
RabbitMQ 3.10 was released on the 3rd of May 2022, with many new features and improvements.
This blog post gives an overview of the performance improvements
in that release. Long story short, you can expect higher throughput, lower latency and faster node startups,
especially with large definitions files imported on startup.
April 25, 2012
by
Simon MacMullen
Welcome back! Last time we talked about flow control and
latency; today let’s talk about how different features affect
the performance we see. Here are some simple scenarios. As
before, they’re all variations on the theme of one publisher and
one consumer publishing as fast as they can.
April 16, 2012
by
Simon MacMullen
So today I would like to talk about some aspects of RabbitMQ’s
performance. There are a huge number of variables that feed into
the overall level of performance you can get from a RabbitMQ
server, and today we’re going to try tweaking some of them and
seeing what we can see.
October 27, 2011
by
Matthew Sackman
Since the new persister arrived in RabbitMQ 2.0.0 (yes, it’s not so
new anymore), Rabbit has had a relatively good story to tell about
coping with queues that grow and grow and grow to sizes too large
to be held in RAM. Rabbit starts writing
out messages to disk fairly early on, and continues to do so at a
gentle rate so that by the time RAM gets really tight, we’ve done most
of the hard work already and thus avoid sudden bursts of
writes. Provided your message rates aren’t too high or too bursty,
this should all happen without any real impact on any connected
clients.
Some recent discussion with a client made us return to what we’d
thought was a fairly solved problem, and prompted us to make some
changes.
March 28, 2011
by
Vlad Alexandru Ionescu
In our previous blog post we talked about a few approaches to topic routing optimization and briefly described the two most important of these. In this post, we will talk about a few things we tried when implementing the DFA, as well as some performance benchmarking we have done on the trie and the DFA.
September 14, 2010
by
Vlad Alexandru Ionescu
Among other things, lately we have been preoccupied with improving RabbitMQ’s routing performance. In particular we have looked into speeding up topic exchanges by using a few well-known algorithms as well as some other tricks. We were able to reach solutions many times faster than our current implementation.