NGINX Link Function Module: Embed C/C++ Code Directly in NGINX

What if you could skip the application server entirely and run your C/C++ code inside NGINX itself? The NGINX link function module makes this possible. It dynamically loads shared libraries (.so files) and calls your functions directly from NGINX location blocks — no reverse proxy, no FastCGI, no extra hop between the client and your code.

This approach delivers bare-metal performance for request handling. Your code runs in the same process as NGINX, with direct access to request headers, query parameters, request bodies, and shared memory across worker processes. For latency-sensitive endpoints like health checks, authentication gates, or lightweight APIs, this module eliminates every unnecessary layer between the client and your logic.

The module operates through a straightforward mechanism: dynamic linking at the NGINX level.

When NGINX starts, it loads your compiled shared library using dlopen(). Each location block can then map to a specific function exported from that library via ngx_link_func_call. When a request hits that location, NGINX invokes your function with a context struct (ngx_link_func_ctx_t) that provides access to the request headers and query parameters, the request body and its length, and a shared memory zone visible to all worker processes, along with helpers for writing the response and logging.

Your function processes the request and writes a response — status code, content type, and body — all in a single function call. NGINX then sends this response back to the client just like any other response.

Additionally, the module provides lifecycle hooks. The ngx_link_func_init_cycle hook runs when NGINX starts or reloads. The ngx_link_func_exit_cycle hook runs when NGINX shuts down. These hooks allow you to initialize databases, open connections, or load configuration before any request arrives.

Architecture Comparison

Consider how a typical request flows through different architectures:

Traditional reverse proxy setup:
Client → NGINX → TCP/Unix socket → Application server → Your code → Response back through all layers

With the NGINX link function module:
Client → NGINX → Your function (in-process) → Response

The difference is significant. There is no inter-process communication, no serialization, and no context switching between processes. Your C/C++ function runs directly inside the NGINX worker process.

Installation

RHEL, CentOS, AlmaLinux, Rocky Linux

Install the module from the GetPageSpeed repository:

sudo dnf install https://extras.getpagespeed.com/release-latest.rpm
sudo dnf install nginx-module-link

Then load the module by adding this line at the very top of /etc/nginx/nginx.conf (before the events block):

load_module modules/ngx_http_link_func_module.so;

Debian and Ubuntu

Debian and Ubuntu packages for the NGINX link function module will be delivered in a future update. Stay tuned to the GetPageSpeed repository for availability announcements.

Configuration Reference

The NGINX link function module provides seven directives. All directive names start with the ngx_link_func_ prefix.

ngx_link_func_lib

Context: server
Arguments: 1 (file path)

Specifies the path to your compiled shared library. This directive is required — without it, the module has nothing to call.

server {
    ngx_link_func_lib /etc/nginx/libmyapp.so;
}

The library is loaded when NGINX reads the configuration. If the file does not exist or cannot be loaded, NGINX will refuse to start. Therefore, always verify the path before reloading.

ngx_link_func_call

Context: location
Arguments: 1 (function name)

Maps a location to a specific function in your shared library. The function must have the signature void function_name(ngx_link_func_ctx_t *ctx).

location = /api/hello {
    ngx_link_func_call "hello_handler";
}

If the function does not exist in the loaded library, NGINX will log an error during startup. Moreover, every function referenced by ngx_link_func_call must be a symbol exported from the library.

ngx_link_func_shm_size

Context: http (main)
Arguments: 1 (size, e.g., 10m, 1g)

Allocates a shared memory zone for the built-in cache. This shared memory is accessible from all NGINX worker processes, which makes it useful for storing session data, rate counters, or any cross-process state.

http {
    ngx_link_func_shm_size 10m;
    # ...
}

Without this directive, the shared_mem pointer in the context struct will be NULL. As a result, any attempt to use the caching API functions will fail.
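
If a deployment might omit this directive, a defensive NULL check in your handler avoids touching an unconfigured zone. A minimal sketch (the handler name and error message are illustrative):

void counter_handler(ngx_link_func_ctx_t *ctx) {
    if (ctx->shared_mem == NULL) {
        /* ngx_link_func_shm_size is not set: fail gracefully instead of crashing */
        ngx_link_func_log_err(ctx, "shared memory zone not configured");
        ngx_link_func_write_resp(ctx, 500, "500 Internal Server Error",
            ngx_link_func_content_type_plaintext,
            "shared memory unavailable",
            sizeof("shared memory unavailable") - 1);
        return;
    }
    /* safe to use the cache API from here on */
}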

ngx_link_func_add_prop

Context: http, server
Arguments: 2 (key, value)

Adds a configuration property that your application code can read at runtime via ngx_link_func_get_prop(). This is useful for passing configuration values from NGINX to your C code without hardcoding them.

server {
    ngx_link_func_lib /etc/nginx/libmyapp.so;
    ngx_link_func_add_prop "db_host" "127.0.0.1";
    ngx_link_func_add_prop "db_port" "5432";
    ngx_link_func_add_prop "app_name" "MyService";
}
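
On the C side, handlers read these values back with ngx_link_func_get_prop(). A minimal sketch assuming the properties above; the handler name is hypothetical and the returned pointer is treated as a NUL-terminated string:

void db_status_handler(ngx_link_func_ctx_t *ctx) {
    /* the third argument is the key length, excluding the terminating NUL */
    char *db_host = (char *) ngx_link_func_get_prop(ctx, "db_host", sizeof("db_host") - 1);
    char *db_port = (char *) ngx_link_func_get_prop(ctx, "db_port", sizeof("db_port") - 1);

    if (db_host == NULL || db_port == NULL) {
        ngx_link_func_log_err(ctx, "db_host/db_port properties are not configured");
        ngx_link_func_write_resp(ctx, 500, "500 Internal Server Error",
            ngx_link_func_content_type_plaintext, "misconfigured", 13);
        return;
    }

    ngx_link_func_log(info, ctx, "using database at %s:%s", db_host, db_port);
    ngx_link_func_write_resp(ctx, 200, "200 OK",
        ngx_link_func_content_type_plaintext, "OK", 2);
}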

ngx_link_func_add_req_header

Context: http, server, location, if
Arguments: 2 (header name, header value)

Adds a custom request header before your function is called. The value can include NGINX variables.

location = /api/data {
    ngx_link_func_add_req_header "X-Real-IP" $remote_addr;
    ngx_link_func_call "data_handler";
}

This is particularly useful for passing NGINX variables (like client IP, SSL status, or upstream headers) into your C function.
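
Inside the handler, the injected header is read back like any other request header. A short sketch of the data_handler referenced above (the log format is illustrative):

void data_handler(ngx_link_func_ctx_t *ctx) {
    /* header name length excludes the terminating NUL */
    char *client_ip = (char *) ngx_link_func_get_header(ctx, "X-Real-IP",
                                                        sizeof("X-Real-IP") - 1);
    if (client_ip) {
        ngx_link_func_log(info, ctx, "request from %s", client_ip);
    }

    ngx_link_func_write_resp(ctx, 200, "200 OK",
        ngx_link_func_content_type_json,
        "{\"ok\": true}", sizeof("{\"ok\": true}") - 1);
}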

ngx_link_func_download_link_lib

Context: server
Arguments: 2-3 (URL, [headers], destination path)

Downloads a shared library from a remote HTTP or HTTPS URL at configuration load time. This feature enables centralized library distribution in cloud deployments.

server {
    ngx_link_func_download_link_lib
        "https://artifacts.example.com/libs/libmyapp.so"
        "Authorization: Bearer my-token"
        "/etc/nginx/libmyapp.so";
}

The download happens during NGINX startup. If the download fails, NGINX will not start. For HTTPS URLs, you may also need to specify a CA certificate.

ngx_link_func_ca_cert

Context: server
Arguments: 1 (file path)

Specifies the CA certificate file for verifying HTTPS connections when downloading libraries with ngx_link_func_download_link_lib.

server {
    ngx_link_func_ca_cert /etc/ssl/certs/ca-bundle.crt;
    ngx_link_func_download_link_lib
        "https://artifacts.example.com/libs/libmyapp.so"
        "/etc/nginx/libmyapp.so";
}

Building a JSON API Endpoint

Let us build a practical example: a JSON API endpoint that responds to GET requests with a personalized greeting.

Step 1: Write the C Code

Create a file called hello_api.c:

#include <stdio.h>
#include <string.h>
#include <ngx_link_func_module.h>

void ngx_link_func_init_cycle(ngx_link_func_cycle_t* cycle) {
    ngx_link_func_cyc_log(info, cycle, "%s", "Hello API initialized");
}

void hello_json(ngx_link_func_ctx_t *ctx) {
    char *name = (char*) ngx_link_func_get_query_param(ctx, "name");
    char buf[256];
    int len;

    if (name) {
        len = snprintf(buf, sizeof(buf),
            "{\"message\": \"Hello, %s!\"}", name);
    } else {
        len = snprintf(buf, sizeof(buf),
            "{\"message\": \"Hello, World!\"}");
    }

    ngx_link_func_write_resp(
        ctx, 200, "200 OK",
        ngx_link_func_content_type_json,
        buf, len
    );
}

void ngx_link_func_exit_cycle(ngx_link_func_cycle_t* cycle) {
    ngx_link_func_cyc_log(info, cycle, "%s", "Hello API shutting down");
}

There are three key parts to every link function application:

  1. ngx_link_func_init_cycle — called when NGINX starts or reloads. Use it to initialize resources.
  2. Your handler functions — called when a matching request arrives. Read the request, process it, write a response.
  3. ngx_link_func_exit_cycle — called when NGINX shuts down or reloads. Clean up resources here.

Define both lifecycle functions. If either is missing from your library, NGINX will log a warning at startup.

Step 2: Compile the Shared Library

Compile your C code into a shared library and place it where NGINX can find it:

gcc -shared -fPIC -I /path/to/nginx-link-function/src -o libhello_api.so hello_api.c
sudo cp libhello_api.so /etc/nginx/

The ngx_link_func_module.h header must be in your include path — it ships with the module source, so point the -I flag at the directory where you unpacked it.

Step 3: Configure NGINX

server {
    listen 8080;
    server_name localhost;

    ngx_link_func_lib /etc/nginx/libhello_api.so;

    location = /api/hello {
        ngx_link_func_call "hello_json";
    }
}

Step 4: Test

nginx -t && nginx -s reload

curl http://localhost:8080/api/hello
# {"message": "Hello, World!"}

curl "http://localhost:8080/api/hello?name=NGINX"
# {"message": "Hello, NGINX!"}

The response arrives with Content-Type: application/json and a 200 OK status — all set by your C code.

The C API Reference

The NGINX link function module exposes a comprehensive C API through the ngx_link_func_module.h header file. Here are the most important functions.

Accessing Request Data

Function / field                              Purpose
ctx->req_args                                 Raw query string (e.g., name=world&lang=en)
ctx->req_body                                 Request body content (for POST/PUT)
ctx->req_body_len                             Length of the request body in bytes
ngx_link_func_get_uri(ctx, &str)              Get the request URI
ngx_link_func_get_query_param(ctx, "key")     Get a specific query parameter value
ngx_link_func_get_header(ctx, "Host", 4)      Get a request header by name
ngx_link_func_get_prop(ctx, "key", 3)         Get a server property value
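
As an example, a handler that echoes a POST body back to the client needs only the body pointer and its length. A minimal sketch (the handler name is illustrative):

void echo_handler(ngx_link_func_ctx_t *ctx) {
    if (ctx->req_body == NULL || ctx->req_body_len == 0) {
        ngx_link_func_write_resp(ctx, 400, "400 Bad Request",
            ngx_link_func_content_type_plaintext,
            "empty body", sizeof("empty body") - 1);
        return;
    }

    /* pass the explicit length rather than assuming NUL termination */
    ngx_link_func_write_resp(ctx, 200, "200 OK",
        ngx_link_func_content_type_plaintext,
        (char *) ctx->req_body, ctx->req_body_len);
}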

Writing Responses

ngx_link_func_write_resp(
    ctx,
    200,                                // HTTP status code
    "200 OK",                           // Status line
    ngx_link_func_content_type_json,    // Content-Type
    response_body,                      // Response body string
    response_length                     // Body length
);

The module provides convenient content type constants: ngx_link_func_content_type_plaintext, ngx_link_func_content_type_html, ngx_link_func_content_type_json, and ngx_link_func_content_type_xformencoded.

Manipulating Headers

// Add a response header
ngx_link_func_add_header_out(ctx, "X-Custom", 8, "value", 5);

// Add/modify a request header (for downstream processing)
ngx_link_func_add_header_in(ctx, "X-User-ID", 9, user_id, strlen(user_id));

Memory Management

Pool-based memory allocation is automatically freed when the request completes:

char *buf = ngx_link_func_palloc(ctx, 1024);      // Allocate from the NGINX request pool
char *zeroed = ngx_link_func_pcalloc(ctx, 1024);  // Allocate and zero-fill
char *copy = ngx_link_func_strdup(ctx, "text");   // Duplicate a string into the pool

Always use these functions instead of malloc() — they allocate from the NGINX request pool, which means you never need to call free().
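
When the response size is not known at compile time, pool allocation avoids both fixed stack buffers and manual free(). A small sketch, assuming <stdio.h> and <string.h> are included:

void greet_handler(ngx_link_func_ctx_t *ctx) {
    char *name = (char *) ngx_link_func_get_query_param(ctx, "name");
    if (name == NULL) {
        name = "World";
    }

    size_t cap = strlen(name) + 64;              /* room for the JSON wrapper */
    char *buf = ngx_link_func_palloc(ctx, cap);  /* released with the request pool */
    if (buf == NULL) {
        return;
    }

    int len = snprintf(buf, cap, "{\"message\": \"Hello, %s!\"}", name);
    ngx_link_func_write_resp(ctx, 200, "200 OK",
        ngx_link_func_content_type_json, buf, len);
}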

Logging

ngx_link_func_log_info(ctx, "Processing request");
ngx_link_func_log_err(ctx, "Something went wrong");

// With formatting (up to 200 characters)
ngx_link_func_log(info, ctx, "User %s requested %s", user, path);

Log messages appear in the NGINX error log at the corresponding level (debug, info, warn, err).

Shared Memory and Cross-Worker Caching

One of the most powerful features is the built-in shared memory support. Since NGINX uses multiple worker processes, regular variables are not shared between them. The shared memory API solves this problem.

Enabling Shared Memory

Add ngx_link_func_shm_size in the http block:

http {
    ngx_link_func_shm_size 10m;
    # ...
}

Using the Cache API

// Store a value
char *cached = ngx_link_func_cache_new(ctx->shared_mem, "session:abc", 256);
if (cached) {
    strcpy(cached, "user_data_here");
}

// Retrieve a value
char *data = ngx_link_func_cache_get(ctx->shared_mem, "session:abc");
if (data) {
    // Use the cached data
}

// Remove a value
ngx_link_func_cache_remove(ctx->shared_mem, "session:abc");

Thread Safety

For operations that require atomicity, use the mutex API:

ngx_link_func_shmtx_lock(ctx->shared_mem);
// Critical section — safe across workers
char *counter = ngx_link_func_cache_get(ctx->shared_mem, "counter");
// ... modify counter ...
ngx_link_func_shmtx_unlock(ctx->shared_mem);

There is also ngx_link_func_shmtx_trylock() for non-blocking lock attempts, which returns immediately if the lock is held.
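
Putting the pieces together, a cross-worker hit counter can combine the cache and mutex APIs. A minimal sketch, assuming <stdio.h>, <stdlib.h>, and <string.h>, with the counter stored as a decimal string in a 32-byte slot (an arbitrary choice):

void hit_counter(ngx_link_func_ctx_t *ctx) {
    char out[64];
    long hits = 0;

    ngx_link_func_shmtx_lock(ctx->shared_mem);

    char *slot = ngx_link_func_cache_get(ctx->shared_mem, "hits");
    if (slot == NULL) {
        /* first request across all workers: create the shared slot */
        slot = ngx_link_func_cache_new(ctx->shared_mem, "hits", 32);
        if (slot) {
            strcpy(slot, "0");
        }
    }
    if (slot) {
        hits = atol(slot) + 1;
        snprintf(slot, 32, "%ld", hits);
    }

    ngx_link_func_shmtx_unlock(ctx->shared_mem);

    int len = snprintf(out, sizeof(out), "{\"hits\": %ld}", hits);
    ngx_link_func_write_resp(ctx, 200, "200 OK",
        ngx_link_func_content_type_json, out, len);
}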

Practical Use Cases

Authentication Gateway

The module is well-suited for authentication logic that runs before the request reaches your application server:

void auth_check(ngx_link_func_ctx_t *ctx) {
    char *token = (char*) ngx_link_func_get_header(
        ctx, "Authorization", sizeof("Authorization") - 1);

    /* validate_token() and user_id_from_token() are your own application logic */
    if (!token || !validate_token(token)) {
        ngx_link_func_write_resp(ctx, 401, "401 Unauthorized",
            ngx_link_func_content_type_json,
            "{\"error\": \"invalid token\"}",
            sizeof("{\"error\": \"invalid token\"}") - 1);
        return;
    }

    // Pass user info downstream
    char *user_id = user_id_from_token(token);
    ngx_link_func_add_header_in(ctx, "X-User-ID", 9, user_id, strlen(user_id));
    ngx_link_func_write_resp(ctx, 200, "200 OK",
        ngx_link_func_content_type_plaintext, "OK", 2);
}

For more NGINX-level authentication approaches, see also NGINX JWT authentication and NGINX digest authentication.

Health Check Endpoint

For load balancers that poll health endpoints, a link function handler responds with minimal overhead:

void health_check(ngx_link_func_ctx_t *ctx) {
    ngx_link_func_write_resp(ctx, 200, "200 OK",
        ngx_link_func_content_type_json,
        "{\"status\": \"healthy\"}",
        sizeof("{\"status\": \"healthy\"}") - 1);
}

Request Routing with Properties

Use ngx_link_func_add_prop to configure routing behavior without recompiling:

server {
    ngx_link_func_lib /etc/nginx/librouter.so;
    ngx_link_func_add_prop "backend_v1" "http://10.0.0.1:3000";
    ngx_link_func_add_prop "backend_v2" "http://10.0.0.2:3000";

    location = /api/route {
        ngx_link_func_call "route_handler";
    }
}

Performance Considerations

The NGINX link function module adds negligible overhead per request because your code runs in-process. However, there are important considerations.

Do not block the worker process. NGINX worker processes are single-threaded event loops. If your function performs a blocking operation (database query, file I/O, sleep), it blocks the entire worker. All other connections handled by that worker will stall as a result.

For CPU-bound tasks, keep the work brief. For I/O-bound tasks, consider using the NGINX thread pool integration. The module supports aio threads when compiled with threading support.

Shared memory is limited. The cache API uses a slab allocator. Frequent allocations and deallocations of varying sizes can lead to fragmentation. Monitor your shared memory usage in production.

Library reloading requires NGINX restart. Unlike interpreted languages, C libraries are loaded at startup. To deploy new code, you must restart (not just reload) NGINX. Plan your deployment strategy accordingly.

Known Issue: Gzip Compression

Version 3.2.4 of the NGINX link function module has a known issue where gzip compression does not work on responses generated by the module. This happens because the code sets r->headers_out.content_type.len but does not set r->headers_out.content_type_len — a separate field that the NGINX gzip filter uses to determine whether compression should apply.

When the gzip filter sees content_type_len == 0, it assumes no content type is set and skips compression entirely. This affects all response types, regardless of your gzip_types configuration.

This bug is tracked in PR #19 on GitHub. A fix will be included in a future package update.

Comparison with Lua and njs

How does this approach compare to other ways of embedding code in NGINX?

Feature             Link Function (C/C++)   Lua (ngx_lua)       njs (JavaScript)
Language            C/C++                   Lua                 JavaScript (subset)
Performance         Highest (native)        High (LuaJIT)       Moderate
Development speed   Slower                  Faster              Faster
Shared memory       Built-in API            ngx.shared.DICT     Limited
Debugging           GDB/valgrind            Print-based         Print-based
Hot reload          Requires restart        Supports reload     Supports reload
Learning curve      Steep                   Moderate            Gentle

Choose the NGINX link function module when you need maximum performance and are comfortable with C. Choose Lua when you want a balance of performance and development speed. Choose njs for simple transformations where JavaScript familiarity helps.

Security Best Practices

Running custom C code inside NGINX carries inherent risks. Follow these practices to minimize them.

Validate all input. Buffer overflows in your C code can compromise the entire NGINX process. Use snprintf() instead of sprintf(), always check string lengths, and never trust data from ctx->req_args or ctx->req_body without validation.
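
For example, before echoing or looking up a query parameter, bound its length and character set. A minimal sketch, assuming <ctype.h> and <string.h>; the 64-byte limit and alphanumeric rule are arbitrary illustrations:

void lookup_handler(ngx_link_func_ctx_t *ctx) {
    char *id = (char *) ngx_link_func_get_query_param(ctx, "id");
    int valid = (id != NULL && strlen(id) <= 64);

    /* reject anything outside a strict alphanumeric whitelist */
    for (char *p = id; valid && *p; p++) {
        if (!isalnum((unsigned char) *p)) {
            valid = 0;
        }
    }

    if (!valid) {
        ngx_link_func_write_resp(ctx, 400, "400 Bad Request",
            ngx_link_func_content_type_json,
            "{\"error\": \"invalid id\"}",
            sizeof("{\"error\": \"invalid id\"}") - 1);
        return;
    }

    /* id is now a bounded, alphanumeric string: safe to format with snprintf() */
}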

Use pool allocation. The ngx_link_func_palloc() functions allocate from the NGINX request pool, which is automatically freed after the request. If you use malloc(), you risk memory leaks that will accumulate over the lifetime of the worker process.

Limit shared library permissions. The .so file should be owned by root and not writable by the NGINX user:

chmod 755 /etc/nginx/libmyapp.so
chown root:root /etc/nginx/libmyapp.so

Avoid ngx_link_func_download_link_lib in production unless you have verified the download source and use HTTPS with proper CA certificate validation. Downloading executable code over the network at startup introduces a supply-chain risk.

Test with AddressSanitizer during development. Compile your application libraries with -fsanitize=address to catch memory errors before they reach production.

Troubleshooting

Module fails to load

Symptom: nginx: [emerg] dlopen() failed in error log.

Solution: Ensure you installed the nginx-module-link package that matches your NGINX version. The package from the GetPageSpeed repository is built to be compatible with the corresponding NGINX version. Run nginx -V to confirm your NGINX version and reinstall the module package if needed.

Function not found

Symptom: function "my_handler" not found in "/etc/nginx/libmyapp.so" during NGINX startup.

Solution: Verify the function is exported. Run nm -D /etc/nginx/libmyapp.so | grep my_handler. If the function does not appear, it may be declared as static or the symbol was stripped. Additionally, make sure both ngx_link_func_init_cycle and ngx_link_func_exit_cycle are defined in your library.

Empty response body

Symptom: NGINX returns 200 but the response body is empty.

Solution: Check that you are calling ngx_link_func_write_resp() in every code path. If your function returns without writing a response, NGINX will send an empty 200 response.

Shared memory errors

Symptom: ngx_link_func_cache_new returns NULL.

Solution: The shared memory zone may be full. Increase ngx_link_func_shm_size or remove unused cache entries. Also verify that ngx_link_func_shm_size is placed in the http block, not inside a server or location block.

Conclusion

The NGINX link function module bridges the gap between the web server and your application code by eliminating the application server entirely. For specific use cases — authentication, health checks, lightweight APIs, real-time data processing — it delivers performance that no interpreted language or reverse proxy setup can match.

The trade-off is clear: you gain raw speed but accept the responsibility of writing safe, correct C code that runs inside a critical piece of infrastructure. For teams with C expertise and performance-critical requirements, this module is a powerful addition to your toolbox.

The module source code is available on GitHub. For RPM-based installations, visit the GetPageSpeed module page.

Danila Vershinin

Founder & Lead Engineer

NGINX configuration and optimization • Linux system administration • Web performance engineering

10+ years NGINX experience • Maintainer of GetPageSpeed RPM repository • Contributor to open-source NGINX modules
