
net/unicoap: tracking: Unified and Modular CoAP Stack #21389

@carl-tud

Description


unicoap aims to address the need for a high-level, beginner-friendly CoAP interface and an extensible library design that can scale with additional CoAP transports.

This issue summarizes all PRs related to unicoap, provides a brief overview, and answers potential open questions. Design discussions are documented in #20792.

Pull Requests

Why do we need unicoap in RIOT?

RIOT currently features three CoAP libraries: GCoAP, nanoCoAP Sock, and nanoCoAP. None of them was designed to support additional CoAP transports or to handle features such as block-wise transfer automatically. unicoap is a unified and modular CoAP suite for the constrained IoT: it provides the high-level, beginner-friendly interface and extensible library design described above, and it can scale with the addition of further CoAP transports.

More questions are addressed in the FAQ.

About unicoap

unicoap features a layered and modular design that enables different transports and advanced features like automatic block-wise transfers. The messaging model and PDU format (e.g., reliable transmission through CON, NON, ACK, and RST messages) vary depending on the transport. Hence, support for each CoAP transport, such as CoAP over UDP or CoAP over DTLS, is encapsulated in a RIOT module, called a driver in unicoap.

unicoap provides a clear separation of different messaging and transport properties. There are three distinct layers beneath the application:

  1. The exchange layer handles REST features (client/server functionality). Support for advanced features is modularized.
  2. The messaging layer implements the transport-dependent messaging model and framing.
  3. The transport layer coordinates with RIOT networking APIs.

The implementations of the messaging and transport layers are also modularized.

Figure: Layer separation in unicoap

FAQ

Can I use unicoap without a separate thread?

Yes. While unicoap uses a separate thread for processing inbound messages by default, you can run the loop function on a thread of your choice. The loop is implemented using an event_queue_t. If you do not want to use an event queue and are willing to roll your own transport handling, you can also call the processing function exposed by the messaging layer yourself.
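For illustration, the sketch below shows the underlying pattern with RIOT's event and thread APIs: a thread you create owns an event queue and blocks in its event loop. This is not the unicoap API itself; unicoap's loop function does the equivalent on top of an event_queue_t, so you would call it from a thread set up like this instead of calling event_loop directly.

```c
#include "event.h"
#include "thread.h"

/* Minimal sketch (not the unicoap API itself): a dedicated thread owns an
 * event queue and blocks in the event loop. unicoap's loop function works
 * the same way on top of an event_queue_t, so it can be run from any
 * thread created like this. */
static char _stack[THREAD_STACKSIZE_DEFAULT];
static event_queue_t _queue;

static void *_loop_thread(void *arg)
{
    (void)arg;
    event_queue_init(&_queue);
    event_loop(&_queue);   /* blocks; queued handlers run on this thread */
    return NULL;
}

void start_loop_thread(void)
{
    thread_create(_stack, sizeof(_stack), THREAD_PRIORITY_MAIN - 1,
                  0, _loop_thread, NULL, "coap-loop");
}
```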

Does unicoap allow me to construct and serialize messages in RAM?

Yes. There are accessors for method, status, and signal codes, as well as for the payload and options. We also offer helper functions to parse messages from and serialize them to binary buffers.

Zero-copy APIs: Is unicoap compatible with vectored APIs?

Yes. unicoap_message_t can reference either contiguous data (uint8_t*) or noncontiguous data (iolist_t*).
There are also helper functions for setting, getting, copying, and appending data.

When building a PDU, you can choose between a version that writes the entire PDU into a contiguous buffer and one that populates an iolist with chunks, including potential payload chunks. The latter avoids copying data twice when the network backend copies frames into a larger buffer anyway before sending.
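As a rough sketch of what such noncontiguous message data looks like, the following uses RIOT's iolist_t (the type unicoap_message_t can reference); the chunk contents are made-up placeholders, not unicoap output:

```c
#include "iolist.h"

/* Placeholder chunks: e.g. serialized header/options in one buffer and the
 * payload in another. */
static uint8_t _header_chunk[]  = { 0x40, 0x01, 0x12, 0x34 };
static uint8_t _payload_chunk[] = "hello";

/* Tail of the chain: the payload chunk. */
static iolist_t _payload_entry = {
    .iol_next = NULL,
    .iol_base = _payload_chunk,
    .iol_len  = sizeof(_payload_chunk) - 1,
};

/* Head of the chain: the header chunk, linked to the payload chunk. A
 * network backend can send this chain without first flattening it into a
 * single contiguous buffer. */
static iolist_t _pdu_chain = {
    .iol_next = &_payload_entry,
    .iol_base = _header_chunk,
    .iol_len  = sizeof(_header_chunk),
};
```

A backend that assembles frames into a larger buffer anyway can simply walk this chain (e.g., with iolist_size or iolist_to_buffer), which is where the second copy is saved.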

The CoAP over UDP driver additionally uses the zero-copy sock receive API. There is a configuration parameter you can set that guarantees the network backend never hands chunked data to the application (GNRC does not do that). In this case, another copy operation when receiving data from the network is avoided.
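The sketch below illustrates the underlying RIOT API in question, sock_udp_recv_buf, independent of unicoap's wrapper around it; `sock` is assumed to be an already-created UDP sock:

```c
#include "net/sock/udp.h"

/* Sketch of zero-copy reception: the stack hands out pointers into its own
 * packet buffer instead of copying into an application-provided buffer. */
ssize_t read_datagram_zero_copy(sock_udp_t *sock)
{
    void *data = NULL;
    void *ctx = NULL;
    ssize_t res;
    ssize_t total = 0;

    /* With GNRC, a datagram arrives as a single chunk, so this loop runs
     * once per datagram; other backends may hand out several chunks. */
    while ((res = sock_udp_recv_buf(sock, &data, &ctx, SOCK_NO_TIMEOUT, NULL)) > 0) {
        /* parse `res` bytes at `data` in place */
        total += res;
    }
    return (res < 0) ? res : total;
}
```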

Is there a compatibility layer for nanoCoAP, nanoCoAP Sock, or GCoAP planned?

No.

Size and Performance Analysis

Build Sizes

The following charts compare unicoap, GCoAP, and nanoCoAP Sock in terms of required memory.


We built three functionally equivalent applications. These applications incorporate client and server functionality. The unicoap application can be optionally compiled to support automatic block-wise transfers (client and server), message deduplication (server), and URI support (client). The unicoap server supports deferring responses until after the request handler has returned. For instance, this feature can be used by proxy applications.

In the following two figures, we measure memory consumption in the RIOT binary: ROM by summing the respective symbol sizes in the .text and .data sections, and RAM from the .bss and .data sections. The -O3 optimization level was used. We also deactivated safety checks in RIOT (DEVELHELP) where possible and disabled logging.

Size groups. Memory sizes are grouped into the following constituents. Core refers to essential features, including message-related and option APIs, request/response state management, as well as client and server functionality. As GCoAP depends on the nanoCoAP parser and option APIs, we added these symbols to GCoAP's Core group. The same applies to nanoCoAP Sock.
Due to the layer separation and modularization in unicoap, we display the implementation of RFC 7252 messaging as a distinct category; the Core groups of GCoAP and nanoCoAP Sock include RFC 7252 instructions and state variables. In the UDP and DTLS groups, we summarize the respective driver implementations. To visualize the difference in impact on memory consumption, we include the TinyDTLS sock implementation in the DTLS group. The Application group contains all instructions and variables of the aforementioned sample application. Under the Block-wise, URI, and Deferred response categories, we group symbols that are only present in the binary when the respective modules are enabled at compile time.

Bars with a larger hatched pattern indicate the size of an optional feature. Both unicoap and nanoCoAP Sock allow running the processing loop on a thread of your choice.






Without DTLS support, unicoap contributes about 13 % to the ROM of the entire RIOT image, while GCoAP adds about 10 % to the total ROM consumption. unicoap and GCoAP constitute ≈ 9 % and ≈ 11 % of the total RAM consumption, respectively. However, it is important to note that other compile-time configurations will lead to different results.

The small light blue bar for UDP support is caused by static socket allocations. In the nanoCoAP Sock example, the application allocates its socket on the stack, hence no corresponding static allocation appears in the chart above.

Additionally, nanoCoAP Sock servers do not support DTLS.

The built-in block-wise transfer support in unicoap contributes another ≈ 0.9 KiB. However, this particular result needs to be interpreted with caution, as the memory impact of block-wise support can be reduced drastically by adjusting compile-time parameters. Our sample application was configured to support two parallel block-wise transfers, each with a 512-byte buffer. Neither GCoAP nor nanoCoAP Sock features automatic block-wise transfers.

Compile-time configuration

Parameter                                          Value  Description
CONFIG_UNICOAP_PDU_SIZE_MAX                        64     Size of retransmission buffer
CONFIG_UNICOAP_OPTIONS_BUFFER_DEFAULT_CAPACITY     24     Maximum size of options data
CONFIG_UNICOAP_BLOCK_SIZE                          32     Size of payload per block message
CONFIG_UNICOAP_MEMOS_MAX                           1      Number of active parallel exchanges
CONFIG_UNICOAP_RFC7252_TRANSMISSIONS_MAX           1      Maximum number of PDU copies for retransmission
CONFIG_UNICOAP_BLOCKWISE_TRANSFERS_MAX             1      Number of active parallel block-wise transfers
CONFIG_UNICOAP_BLOCKWISE_BUFFERS_MAX               1      Number of buffers for slicing and reassembly
CONFIG_UNICOAP_BLOCKWISE_REPRESENTATION_SIZE_MAX   512    Maximum size of a reassembled representation
CONFIG_UNICOAP_DEBUG_LOGGING                       0      Debug logging (warnings, error messages)
CONFIG_GCOAP_PDU_BUF_SIZE                          64     Size of retransmission buffer; influences total size of options
CONFIG_GCOAP_REQ_WAITING_MAX                       1      Number of active parallel client exchanges
CONFIG_GCOAP_RESEND_BUFS_MAX                       1      Maximum number of PDU copies for retransmission

Performance Analysis

The following figures compare unicoap with nanoCoAP in terms of execution time. Even though unicoap is more flexible, it is not necessarily slower.



In this experiment, we measured the time to execute get, add/insert, and remove operations on a message containing a specified number of options. The results show the average time over multiple runs with the same parameter set, plotted against the number of options already present.

For each of the operations get, add/insert, and remove, we differentiate between a trivial case (i.e., the option is located at the end of the options buffer) and a complex case (i.e., the option is located in between other options).

The step effects occurring in (a) – (d) are artifacts of the 1 microsecond visualization resolution.
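As a hedged sketch of how such timings can be taken (the actual benchmark harness is not part of this issue), microsecond timestamps around a single option operation with RIOT's ztimer look as follows; the nanoCoAP getter is used here as the measured operation, and `pkt` is assumed to be an already-parsed message:

```c
#include "ztimer.h"
#include "net/nanocoap.h"

/* Sketch only: time a single option lookup with microsecond resolution.
 * Requires the ztimer_usec module. */
static uint32_t time_opt_get(coap_pkt_t *pkt, uint16_t optnum)
{
    uint8_t *value;
    uint32_t start = ztimer_now(ZTIMER_USEC);
    ssize_t len = coap_opt_get_opaque(pkt, optnum, &value);
    uint32_t elapsed = ztimer_now(ZTIMER_USEC) - start;

    (void)len;  /* negative if the option is absent */
    return elapsed;
}
```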

Getter

The average time needed to retrieve an option value grows linearly with the number of options present in the buffer, see (a) and (b). Both implementations require slightly less time when the option is located in the middle, see (b), because of the linear search: the algorithm finds an option in the middle earlier than one at the end. unicoap's growth rate is marginally higher in both cases because of its additional sanity checks.

Setter

When adding options in nanoCoAP (the coap_opt_add_opaque function), the timing is independent of the number of options already present; in unicoap, it scales linearly, see (c). The reason is as follows: nanoCoAP requires options to be inserted in order and thus does not need to check whether there are adjacent options. unicoap does not impose such a requirement and therefore must first iterate over the list of options until the correct insertion slot is found. Moreover, unicoap implements safeguard checks to prevent adding options when the buffer capacity is insufficient.
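For context, the sketch below shows nanoCoAP's in-order constraint: options must be appended with non-decreasing option numbers (Uri-Path 11 before Uri-Query 15 before Accept 17), so the writer never has to shift previously written options. It uses nanoCoAP's public API as documented in RIOT; buffer size and message ID are arbitrary:

```c
#include "net/nanocoap.h"

static uint8_t _buf[128];

/* Build a GET request; options are appended strictly in ascending
 * option-number order, which is what keeps nanoCoAP's add path constant
 * time regardless of how many options are already present. */
ssize_t build_request(void)
{
    coap_pkt_t pkt;
    uint8_t token[2] = { 0xbe, 0xef };

    ssize_t hdr_len = coap_build_hdr((coap_hdr_t *)_buf, COAP_TYPE_CON,
                                     token, sizeof(token),
                                     COAP_METHOD_GET, 0x1234);
    coap_pkt_init(&pkt, _buf, sizeof(_buf), hdr_len);

    coap_opt_add_string(&pkt, COAP_OPT_URI_PATH, "/sensors/temp", '/');  /* 11 */
    coap_opt_add_string(&pkt, COAP_OPT_URI_QUERY, "unit=c", '&');        /* 15 */
    coap_opt_add_uint(&pkt, COAP_OPT_ACCEPT, COAP_FORMAT_TEXT);          /* 17 */

    return coap_opt_finish(&pkt, COAP_OPT_FINISH_NONE);
}
```

Inserting an option with a lower number after a higher one (the out-of-order case measured for unicoap in (d)) has no counterpart in this API.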

Figure (d) shows the results when trailing options change and must be moved. These results do not contain data for nanoCoAP because inserting options out of order is not supported.

Remove

When removing CoAP options from message buffers, unicoap outperforms nanoCoAP, see Figures (e) and (f). On average, unicoap is faster by more than 45 microseconds. The cause of the outliers visible in Figure (f) has not been identified yet.

Parser

unicoap performs slightly better than nanoCoAP, in particular when multiple options are present, see Figure (g).
