
libhv is a cross-platform C/C++ network library for developing TCP/UDP/SSL/HTTP/WebSocket clients and servers.

Q: The name of libhv

Like libevent, libev, and libuv, libhv provides an event loop with non-blocking IO and timers, but with a simpler API and richer protocol support.
The name libhv follows this lineage and stands for a High-performance event loop library.

Q: What is the difference between libhv and libevent, libev, libuv?

  • libevent is the oldest and carries historical baggage; although bufferevent is ingenious, it is hard to understand and use;
  • libev can be seen as a simplified libevent: the code is extremely lean, but it uses too many macro definitions, which makes it hard to read, and its Windows support is poor;
  • libuv is the C library underneath nodejs. It initially used libevent plus added Windows IOCP support and was later rewritten entirely; it also implements asynchronous pipe and file IO. It is very powerful, but it has more structures and deeper encapsulation, and uv_write personally feels awkward to use;
  • libhv draws on the design ideas of libevent, libev, and libuv. Their common core is the event loop (processing IO, timer, and other events in one loop), but libhv provides the leanest interface, with an API closest to the native system calls, making it the easiest to get started with;
  • For concrete echo-server implementations of these libraries, see https://github.com/ithewei/libhv/tree/master/echo-servers;
  • In addition, libhv integrates SSL/TLS encrypted communication and supports heartbeat, forwarding, unpacking, and thread-safe writing and closing; it also implements HTTP, WebSocket, and other protocols. Of course, the performance of these libraries is similar, since they all make the most of non-blocking IO multiplexing;

Q: Positioning of libhv

  • Exquisite, small, cross-platform, simple, practical, and easy to use;
  • The base module encapsulates a lot of cross-platform code such as atomics, threads, and thread synchronization; this is built on the platform and compiler macros provided by the two headers hconfig.h and hplatform.h that configure/cmake generates automatically;
  • The event module implements the event loop (including IO, timers, and idle events), with a different backend per platform: epoll on Linux, IOCP on Windows, kqueue on macOS, evport on Solaris; if you are interested, read the source under event;
  • The http module builds on the event module to implement HTTP, the most common application-layer protocol of this century, including both an http server and client. The httpd provided under examples offers performance comparable to nginx;

Q: Plan of libhv

Implement more common application-layer protocols on top of the event module, such as MQTT, Redis, Kafka, MySQL, etc.
For more plans, see docs/PLAN.md.

Q: Performance of libhv

Q: How to get started with libhv?

  • It is recommended to start by running the getting_started.sh script in the project root (see below); you will quickly see how convenient libhv's httpd is;
  • Then look at the sample code under examples;
  • Recommended source-reading order: base -> event -> http;
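
For example:

git clone https://github.com/ithewei/libhv.git
cd libhv
./getting_started.sh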

Q: How to use libhv?

libhv can be built as a dynamic or static library with Makefile or cmake. After make install, include the relevant headers (the headers of the base module are scattered, so you can simply #include "hv.h") and link the library files. Of course, libhv's modules are cleanly separated and loosely coupled, so you can also copy the source files you need directly into your own project; for example, the logging facility hlog.h/hlog.c can be used on its own.
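
For instance, a minimal sketch of the installed-library route (header and library search paths depend on your install prefix):

#include "hv.h"    // umbrella header for the base module (pulls in hlog.h, etc.)

int main() {
    hlog_set_level(LOG_LEVEL_INFO);   // the hlog.h/hlog.c logging facility mentioned above
    hlogi("hello libhv");
    return 0;
}

// build: g++ main.cpp -o main -lhv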

Q: How to cross-compile libhv?

Makefile:

sudo apt install gcc-arm-linux-gnueabi g++-arm-linux-gnueabi
export CROSS_COMPILE=arm-linux-gnueabi-
./configure
make clean
make libhv

or cmake:

mkdir build
cd build
cmake .. -DCMAKE_C_COMPILER=arm-linux-gnueabi-gcc -DCMAKE_CXX_COMPILER=arm-linux-gnueabi-g++
cmake --build . --target libhv libhv_static

For more on supported platforms and build options, see BUILD.md

Q: Requirements on Windows

On Windows the minimum supported Visual Studio version is VS2015, because the http module uses the modern C++ JSON library nlohmann::json.

If you want to build with an older VS version or use only the C API, you can turn off the C++11-dependent modules WITH_EVPP and WITH_HTTP in cmake and compile only the C modules such as base and event, as shown below.
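
For example, a sketch of the corresponding cmake invocation (option names as above):

mkdir build
cd build
cmake .. -DWITH_EVPP=OFF -DWITH_HTTP=OFF
cmake --build .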

Q: Linking the library on Windows

On Windows, generate a Visual Studio project via cmake, then open hv.sln and build. This produces the headers under include/hv, the static library lib/hv_static.lib, and the dynamic library lib/hv.dll, so there are two ways to link:

1. hv.lib + hv.dll

  • Option I: Project -> Properties -> Linker -> Input -> Additional Dependencies: add hv.lib;
  • Option II: add #pragma comment(lib, "hv.lib") to your code;

2. HV_STATICLIB + hv_static.lib

  • Project -> Properties -> C/C++ -> Preprocessor: add the preprocessor macro HV_STATICLIB to disable the dynamic-library import macro #define HV_EXPORT __declspec(dllimport) in the hexport.h header;
  • Project -> Properties -> Linker -> Input -> Additional Dependencies: add hv_static.lib,
    or add #pragma comment(lib, "hv_static.lib") to your code;
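
A minimal sketch of route 2 in code (same effect as setting the macro in the project properties):

#define HV_STATICLIB            // disables #define HV_EXPORT __declspec(dllimport) in hexport.h
#include "hv.h"
#pragma comment(lib, "hv_static.lib")

int main() {
    hlogi("linked against hv_static.lib");
    return 0;
}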

Q: How to enable SSL/TLS, https, wss?

libhv integrates OpenSSL to support SSL/TLS encrypted communication; turn on the WITH_OPENSSL option in config.mk or CMakeLists.txt and recompile.

Makefile:

./configure --with-openssl
make clean && make && sudo make install

cmake:

mkdir build
cd build
cmake .. -DWITH_OPENSSL=ON
cmake --build .
sudo cmake --install .

Test https:

bin/httpd -s restart -d
bin/curl -v http://localhost:8080
bin/curl -v https://localhost:8443
# curl -v https://127.0.0.1:8443 --insecure

For an https example, see the TEST_HTTPS-related code in examples/http_server_test.cpp;
for a wss example, see the TEST_WSS-related code in examples/websocket_server_test.cpp.

Q: How to upload and download file by HTTP?

Upload file:

  • To upload only a file, set the Content-Type (such as image/jpeg) and read the file content into the body; see the HttpMessage::File and requests::uploadFile interfaces;
  • To upload a file together with other parameters, the formdata format is recommended, i.e. Content-Type: multipart/form-data; see the HttpMessage::FormFile and requests::uploadFormFile interfaces;
  • If you must use the json format, base64-encode the binary file and assign it to the body;

Download file:

  • The httpd service comes with a static-file service: set document_root, and files under that directory can be downloaded by URL, e.g. wget http://ip:port/path/to/filename (see the client sketch below);
  • For downloading large files, it is recommended to request them in segments with the Range header; for details, see examples/wget.cpp;
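
A minimal client-side sketch using the requests API (requests.h) to fetch a file served from document_root; the URL and file name are placeholders:

#include <cstdio>
#include <fstream>
#include "requests.h"

int main() {
    auto resp = requests::get("http://127.0.0.1:8080/index.html");
    if (resp == NULL || resp->status_code != 200) {
        printf("download failed!\n");
        return -1;
    }
    // the whole body is buffered in memory here; for large files prefer Range requests (see examples/wget.cpp)
    std::ofstream ofs("index.html", std::ios::binary);
    ofs << resp->body;
    return 0;
}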

Q: How to respond asynchronously by HTTP?

To write an http server, it is strongly recommended to read through examples/httpd; it has everything you need.

  • async: see /async;
  • timer: see Handler::setTimeout;
  • json: see Handler::json;
  • formdata: see Handler::form;
  • urlencoded: see Handler::kv;
  • restful: see Handler::restful;

// sync handler
typedef std::function<int(HttpRequest* req, HttpResponse* resp)>                            http_sync_handler;
// async handler
typedef std::function<void(const HttpRequestPtr& req, const HttpResponseWriterPtr& writer)> http_async_handler;
// Similar to nodejs koa's ctx handler: compatible with both handlers above; you decide inside the callback whether to respond synchronously or asynchronously.
typedef std::function<int(const HttpContextPtr& ctx)>                                       http_ctx_handler;

For historical compatibility, all three handler formats above are retained; choose the one that suits your business and how long your interface takes to respond. If you are using a newer version of libhv, the http_ctx_handler with the HttpContext parameter is recommended.

Equivalent implementations with the three handlers:

    // sync handler: run on IO thread
    router.POST("/echo", [](HttpRequest* req, HttpResponse* resp) {
        resp->content_type = req->content_type;
        resp->body = req->body;
        return 200;
    });

    // async handler: runs on the hv::async GlobalThreadPool
    router.POST("/echo", [](const HttpRequestPtr& req, const HttpResponseWriterPtr& writer) {
        writer->Begin();
        writer->WriteStatus(HTTP_STATUS_OK);
        writer->WriteHeader("Content-Type", req->GetHeader("Content-Type"));
        writer->WriteBody(req->body);
        writer->End();
    });

    // The handler with the HttpContext parameter is the latest style, compatible with both synchronous and asynchronous responses; it is the recommended one.
    // The callback runs on the IO thread; work can be handed to the hv::async global thread pool, or to your own consumer thread/thread pool.
    // HttpContext holds HttpRequest and HttpResponseWriter members and, modeled on nodejs koa, provides a series of member functions for operating on HttpRequest and HttpResponse, making handlers more concise.
    router.POST("/echo", [](const HttpContextPtr& ctx) {
        return ctx->send(ctx->body(), ctx->type());
    });

    router.POST("/echo", [](const HttpContextPtr& ctx) {
        // This demo hands the work to the hv::async global thread pool; in real use it is better to hand it to your own consumer thread/thread pool.
        hv::async([ctx]() {
            ctx->send(ctx->body(), ctx->type());
        });
        return 0;
    });

Tips:

  • std::async has different implementations across C++ runtimes: some use a thread pool, some spawn a new thread on the spot, and waiting on the return value blocks, so it is not recommended. Use hv::async instead (requires #include "hasync.h"); hv::async::startup lets you configure the global thread pool's minimum threads, maximum threads, and maximum idle time, and hv::async::cleanup destroys the global thread pool;
  • On whether a request needs to be handed to a consumer thread: when concurrency is not high, setting worker_threads for multithreading is usually enough; only handlers that take on the order of seconds or more should hand the HttpContextPtr to a consumer thread pool for processing;
  • For sending large files, refer to largeFileHandler in examples/httpd: a separate thread reads the file and sends it, but pay attention to flow control. Disk IO is generally faster than network IO, and if the peer receives too slowly, the sent data piles up in the send buffer and consumes a lot of memory. The example checks the return value of WriteBody and adjusts the sleep interval to throttle the sending rate. Alternatively, get the socket via hio_fd(ctx->writer->io()) and switch it to blocking sends; or set ctx->writer->onwrite to watch write-complete events and count written bytes before deciding whether to continue; or call hio_write_bufsize to get the number of bytes currently backlogged in the write buffer and decide whether to continue sending;
  • For sending real-time stream data whose length is not known in advance, use the chunked method. The basic flow in the callback is Begin -> EndHeaders("Transfer-Encoding", "chunked") -> WriteChunked -> WriteChunked -> ... -> End (see the sketch below);
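
A rough sketch of that chunked flow, reusing the async writer API shown above (the route and payloads are made up for illustration):

router.GET("/stream", [](const HttpRequestPtr& req, const HttpResponseWriterPtr& writer) {
    writer->Begin();
    writer->WriteStatus(HTTP_STATUS_OK);
    writer->WriteHeader("Content-Type", "text/plain");
    writer->EndHeaders("Transfer-Encoding", "chunked");
    writer->WriteChunked("hello ", 6);   // one chunk per WriteChunked call
    writer->WriteChunked("world\n", 6);
    writer->End();                       // finish the response
});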

Q: How to handle TCP streaming?

libhv provides an interface for setting unpacking rules: for the C interface see hio_set_unpack, for the C++ interface see SocketChannel::setUnpack. Three common unpacking methods are supported: fixed packet length, delimiter, and a length field in the header. Once the unpacking rule is set through this interface, sticky and split packets are handled internally according to the rule, guaranteeing that each callback delivers one complete packet, which greatly saves the upper layer the cost of reassembly. The interface is defined as follows:

typedef enum {
    UNPACK_BY_FIXED_LENGTH  = 1,
    UNPACK_BY_DELIMITER     = 2,    // for example: "\r\n"
    UNPACK_BY_LENGTH_FIELD  = 3,
} unpack_mode_e;

#define DEFAULT_PACKAGE_MAX_LENGTH  (1 << 21)   // 2M

// UNPACK_BY_DELIMITER
#define PACKAGE_MAX_DELIMITER_BYTES 8

// UNPACK_BY_LENGTH_FIELD
typedef enum {
    ENCODE_BY_VARINT        = 17,
    ENCODE_BY_LITTEL_ENDIAN = LITTLE_ENDIAN,
    ENCODE_BY_BIG_ENDIAN    = BIG_ENDIAN,
} unpack_coding_e;

typedef struct unpack_setting_s {
    unpack_mode_e   mode;
    unsigned int    package_max_length;
    // UNPACK_BY_FIXED_LENGTH
    unsigned int    fixed_length;
    // UNPACK_BY_DELIMITER
    unsigned char   delimiter[PACKAGE_MAX_DELIMITER_BYTES];
    unsigned short  delimiter_bytes;
    // UNPACK_BY_LENGTH_FIELD
    unsigned short  body_offset; // real_body_offset = body_offset + varint_bytes - length_field_bytes
    unsigned short  length_field_offset;
    unsigned short  length_field_bytes;
    unpack_coding_e length_field_coding;
#ifdef __cplusplus
    unpack_setting_s() {
        // Recommended setting:
        // head = flags:1byte + length:4bytes = 5bytes
        mode = UNPACK_BY_LENGTH_FIELD;
        package_max_length = DEFAULT_PACKAGE_MAX_LENGTH;
        fixed_length = 0;
        delimiter_bytes = 0;
        body_offset = 5;
        length_field_offset = 1;
        length_field_bytes = 4;
        length_field_coding = ENCODE_BY_BIG_ENDIAN;
    }
#endif
} unpack_setting_t;

HV_EXPORT void hio_set_unpack(hio_t* io, unpack_setting_t* setting);

For example:

unpack_setting_t ftp_unpack_setting;
memset(&ftp_unpack_setting, 0, sizeof(unpack_setting_t));
ftp_unpack_setting.package_max_length = DEFAULT_PACKAGE_MAX_LENGTH;
ftp_unpack_setting.mode = UNPACK_BY_DELIMITER;
ftp_unpack_setting.delimiter[0] = '\r';
ftp_unpack_setting.delimiter[1] = '\n';
ftp_unpack_setting.delimiter_bytes = 2;

unpack_setting_t mqtt_unpack_setting = {
    .mode = UNPACK_BY_LENGTH_FIELD,
    .package_max_length = DEFAULT_PACKAGE_MAX_LENGTH,
    .body_offset = 2,
    .length_field_offset = 1,
    .length_field_bytes = 1,
    .length_field_coding = ENCODE_BY_VARINT,
};
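
And a rough sketch of the C++ side via SocketChannel::setUnpack; the TcpServer usage follows libhv's echo-server examples, so check those for the exact API:

#include "TcpServer.h"
using namespace hv;

// keep the setting alive for the lifetime of the connections
static unpack_setting_t line_unpack_setting;

int main() {
    line_unpack_setting.mode = UNPACK_BY_DELIMITER;
    line_unpack_setting.package_max_length = DEFAULT_PACKAGE_MAX_LENGTH;
    line_unpack_setting.delimiter[0] = '\r';
    line_unpack_setting.delimiter[1] = '\n';
    line_unpack_setting.delimiter_bytes = 2;

    TcpServer srv;
    if (srv.createsocket(1234) < 0) return -1;
    srv.onConnection = [](const SocketChannelPtr& channel) {
        if (channel->isConnected()) {
            channel->setUnpack(&line_unpack_setting);   // onMessage now receives one complete line per call
        }
    };
    srv.onMessage = [](const SocketChannelPtr& channel, Buffer* buf) {
        channel->write(buf);   // echo back the whole packet
    };
    srv.start();
    while (1) hv_sleep(1);
    return 0;
}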

The implementation is in event/unpack.c: unpacking and splitting are done in place on the internal readbuf, achieving essentially zero copy, which is more efficient than handing the raw stream to the upper layer for processing. If you are interested, you can study it.

For examples, see examples/jsonrpc and examples/protorpc.