ZeroMQ with high latency

I’m trying to run a local, low-latency control loop with ZeroMQ using the PUB/SUB pattern.

However, on several standard Ubuntu LTS installations (from 16.xx to 20.xx) and different PCs, all running the default kernel, I experience fairly high latencies of 0.3 ms up to more than 1 ms.

My ZeroMQ version is 4.3.2 and the cppzmq version is 4.2 (but I experience the same issue with the Node.js and PHP bindings as well).

Example outputs:

TOPIC                  RECV_US              SEND_US
[datawriter_CPLUSPLUS] 1627690147280.142090 1627690147279.663086
[datawriter_CPLUSPLUS] 1627690147380.287109 1627690147379.824951
[datawriter_CPLUSPLUS] 1627690147480.525879 1627690147480.058105
[datawriter_CPLUSPLUS] 1627690147580.789062 1627690147580.251953
[datawriter_CPLUSPLUS] 1627690147680.885010 1627690147680.388916
[datawriter_CPLUSPLUS] 1627690147781.051025 1627690147780.531982
[datawriter_CPLUSPLUS] 1627690147881.116943 1627690147880.676025
[datawriter_CPLUSPLUS] 1627690147981.365967 1627690147980.818115
[datawriter_CPLUSPLUS] 1627690148081.508057 1627690148080.954102
[datawriter_CPLUSPLUS] 1627690148181.571045 1627690148181.091064
[datawriter_CPLUSPLUS] 1627690148281.747070 1627690148281.235107
[datawriter_CPLUSPLUS] 1627690148381.841064 1627690148381.378906
[datawriter_CPLUSPLUS] 1627690148482.018066 1627690148481.541992
[datawriter_CPLUSPLUS] 1627690148582.245117 1627690148581.775879
[datawriter_CPLUSPLUS] 1627690148682.593018 1627690148681.972900

The output comes from running the following simple publisher and subscriber programs I wrote for debugging:

Publisher

#include "zhelpers.hpp"
#include <future>
#include <iostream>
#include <string>

int main()
{
    zmq::context_t ctx;
    zmq::socket_t publisher(ctx, zmq::socket_type::pub);
    publisher.bind("tcp://127.0.0.1:3000");

    struct timeval time;
    while (true) {
        gettimeofday(&time, NULL);
        unsigned long long microsec = ((unsigned long long)time.tv_sec * 1000000) + time.tv_usec;
        std::string string = std::to_string(microsec/1E3);
        zmq::message_t message(string.size());
        std::memcpy (message.data(), string.data(), string.size());

        publisher.send(zmq::str_buffer("datawriter_CPLUSPLUS"), zmq::send_flags::sndmore);
        publisher.send(message);
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
    }
}

Subscriber


#include "zhelpers.hpp"
#include <future>
#include <iostream>
#include <string>

int main () {
    zmq::context_t context(1);
    zmq::socket_t subscriber (context, ZMQ_SUB);
    subscriber.connect("tcp://localhost:3000");
    subscriber.setsockopt( ZMQ_SUBSCRIBE, "datalogger_CPLUSPLUS", 1);
    
    struct timeval time;

    while (1) {
        std::string address = s_recv (subscriber);
        std::string contents = s_recv (subscriber);
        
        gettimeofday(&time, NULL);
        unsigned long long microsec = ((unsigned long long)time.tv_sec * 1000000) + time.tv_usec;
        std::string string = std::to_string(microsec/1E3);


        std::cout << "[" << address << "] " << string << " " << contents << std::endl;
    }
    return 0;
}

My target latency is below 100 microseconds, rather than the current 300 – 1300 microseconds.
The latencies above look extremely high to me, and I’m a bit out of ideas as to whether this is an issue with ZeroMQ, my implementation, or my system / kernel configuration.
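
One thing I have considered to narrow this down (a sketch only, not part of the tests above; the endpoint name is arbitrary, and it assumes a cppzmq version that provides the send_flags / recv_flags API used in my publisher) is to run PUB and SUB in a single process, time one hop with a monotonic clock, and compare the inproc:// transport with tcp://127.0.0.1 to separate ZeroMQ’s own overhead from the loopback TCP stack:

#include <zmq.hpp>
#include <chrono>
#include <iostream>
#include <string>
#include <thread>

int main(int argc, char* argv[])
{
    // Pass "inproc://probe" or "tcp://127.0.0.1:3000" to compare transports.
    const std::string endpoint = argc > 1 ? argv[1] : "tcp://127.0.0.1:3000";

    zmq::context_t ctx;
    zmq::socket_t pub(ctx, zmq::socket_type::pub);
    zmq::socket_t sub(ctx, zmq::socket_type::sub);

    pub.bind(endpoint);                     // for inproc, bind must precede connect
    sub.connect(endpoint);
    sub.setsockopt(ZMQ_SUBSCRIBE, "", 0);   // subscribe to everything

    // Give the subscription time to propagate ("slow joiner" effect).
    std::this_thread::sleep_for(std::chrono::milliseconds(200));

    for (int i = 0; i < 20; ++i) {
        auto t0 = std::chrono::steady_clock::now();
        pub.send(zmq::str_buffer("tick"), zmq::send_flags::none);

        zmq::message_t msg;
        (void)sub.recv(msg, zmq::recv_flags::none);
        auto t1 = std::chrono::steady_clock::now();

        std::cout << endpoint << " one-way: "
                  << std::chrono::duration_cast<std::chrono::microseconds>(t1 - t0).count()
                  << " us" << std::endl;
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
    }
    return 0;
}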

ADDED

These are my machine’s context-switch times, which are fairly consistent across different runs:

./cpubench.sh
model name : AMD Ryzen 7 PRO 4750U with Radeon Graphics
1 physical CPUs, 8 cores/CPU, 2 hardware threads/core = 16 hw threads total
-- No CPU affinity --
10000000 system calls in 874207825ns (87.4ns/syscall)
2000000 process context switches in 4237346473ns (2118.7ns/ctxsw)
2000000  thread context switches in 4877734722ns (2438.9ns/ctxsw)
2000000  thread context switches in 318133810ns (159.1ns/ctxsw)
-- With CPU affinity --
10000000 system calls in 525663616ns (52.6ns/syscall)
2000000 process context switches in 2814706665ns (1407.4ns/ctxsw)
2000000  thread context switches in 2402846574ns (1201.4ns/ctxsw)
2000000  thread context switches in 407292570ns (203.6ns/ctxsw)

And this is a simple PHP Redis script against a local redis-server with a default installation, which shows several times lower latency (< 100 us to 400 us) than any C++ / PHP / Node ZeroMQ implementation I could come up with:

1627695114039.4 1627695114039.2
1627695114139.8 1627695114139.6
1627695114240.1 1627695114239.9
1627695114340.3 1627695114340.2
1627695114440.5 1627695114440.3
1627695114540.7 1627695114540.6
1627695114640.9 1627695114640.8
1627695114741.2 1627695114741.1

Answer

The latency you’re measuring is from the call to gettimeofday() in the publisher to the gettimeofday() in the subscriber. It will vary with the differences between the two PCs’ RTCs which, even if synced with something like ntpd, are never perfectly aligned. If you had the subscriber reflect the message back down another socket, the publisher could measure the round-trip time.
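
For illustration, here is a minimal sketch of that round-trip idea (not the asker’s code; both ends live in one process, REQ/REP is used purely as a convenient echo, the port is arbitrary, and it assumes a cppzmq version with the send_flags / recv_flags API):

#include <zmq.hpp>
#include <chrono>
#include <iostream>
#include <thread>

int main()
{
    zmq::context_t ctx;
    const int iterations = 20;

    // Echo side: whatever arrives on the REP socket is sent straight back.
    std::thread echo([&ctx, iterations] {
        zmq::socket_t rep(ctx, zmq::socket_type::rep);
        rep.bind("tcp://127.0.0.1:3001");
        for (int i = 0; i < iterations; ++i) {
            zmq::message_t msg;
            (void)rep.recv(msg, zmq::recv_flags::none);
            rep.send(msg, zmq::send_flags::none);
        }
    });

    // Timing side: one monotonic clock on one host, so RTC offsets drop out.
    zmq::socket_t req(ctx, zmq::socket_type::req);
    req.connect("tcp://127.0.0.1:3001");

    for (int i = 0; i < iterations; ++i) {
        auto t0 = std::chrono::steady_clock::now();
        req.send(zmq::str_buffer("ping"), zmq::send_flags::none);

        zmq::message_t reply;
        (void)req.recv(reply, zmq::recv_flags::none);
        auto t1 = std::chrono::steady_clock::now();

        std::cout << "round trip: "
                  << std::chrono::duration_cast<std::chrono::microseconds>(t1 - t0).count()
                  << " us (one way is roughly half)" << std::endl;
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
    }
    echo.join();
    return 0;
}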

Having said that, I would not expect latencies better than what you’re measuring on any data exchange via Ethernet, regardless. The traffic is too much at the mercy of everything else going on in the network and in the PCs concerned. If you need to guarantee that one PC reacts within 100 us of an event on another PC, Ethernet / TCP/IP / Linux / a PC is probably the wrong technology to use.

For example, if your PC’s CPU decides to change voltage / clock modes, the whole PC can stop for far longer than 100 us whilst that happens. I’ve seen some Xeon systems have whole-machine pauses of 300 ms whilst such CPU mode changes occur. Such things are beyond the OS’s ability to control – they’re down at the firmware layer.
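
As an aside, the kernel-level part of this (frequency scaling by the cpufreq governor) can at least be inspected; a small sketch, assuming the standard Linux cpufreq sysfs layout (firmware-initiated events like the Xeon pauses above will not show up here):

#include <fstream>
#include <iostream>
#include <string>
#include <thread>

// Read a single line from a sysfs file; returns an empty string if absent.
static std::string read_line(const std::string& path)
{
    std::ifstream f(path);
    std::string line;
    std::getline(f, line);
    return line;
}

int main()
{
    const unsigned cpus = std::thread::hardware_concurrency();
    for (unsigned cpu = 0; cpu < cpus; ++cpu) {
        const std::string base =
            "/sys/devices/system/cpu/cpu" + std::to_string(cpu) + "/cpufreq/";
        std::cout << "cpu" << cpu
                  << " governor=" << read_line(base + "scaling_governor")
                  << " cur_khz=" << read_line(base + "scaling_cur_freq")
                  << std::endl;
    }
    return 0;
}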