
NGINX Upload Module: File Upload Handling Guide

by Danila Vershinin


We have by far the largest RPM repository with NGINX module packages and VMODs for Varnish. If you want to install NGINX, Varnish, and lots of useful performance/security software with smooth yum upgrades for production use, this is the repository for you.
Active subscription is required.

Handling file uploads efficiently is a critical challenge for web servers processing user-submitted content. The standard approach of passing entire file uploads through your application backend creates significant overhead. The NGINX upload module solves this problem by offloading file upload processing directly to NGINX, dramatically improving upload performance and reducing backend load.

What Problem Does the NGINX Upload Module Solve?

When users upload files through a web application, the traditional flow looks like this:

  1. Client sends multipart/form-data request to NGINX
  2. NGINX proxies the entire request body to the backend application
  3. Backend application parses the multipart data
  4. Backend writes file data to disk
  5. Backend processes the file and sends response

This approach has several significant drawbacks:

  • Memory consumption: The backend must buffer the entire upload in memory
  • Backend blocking: Application workers are occupied during the upload
  • Duplicate I/O: Data is written twice, first by NGINX and then by the application
  • No resumability: Dropped connections require starting over

The NGINX upload module addresses all of these issues. It handles file uploads directly at the NGINX level, before the request reaches your backend.

How the NGINX Upload Module Works

The NGINX upload module intercepts multipart/form-data POST requests and processes them according to RFC 1867. Instead of passing raw file data to your backend, it:

  1. Parses the multipart request body
  2. Saves uploaded files directly to a configured directory
  3. Strips file content from the request
  4. Replaces file fields with metadata (filename, path, size, checksums)
  5. Forwards the modified request to your backend

Your backend then receives a lightweight request containing only file metadata and any non-file form fields. It can process files by reading them from the paths NGINX provides.
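For example, with the underscore-style metadata fields configured later in this guide, a single PDF uploaded under the form field name document reaches the backend as a handful of small text fields rather than the raw file bytes (illustrative values only):

document_name=report.pdf
document_content_type=application/pdf
document_path=/var/upload/1/00/0000000001
document_size=1048576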

Installation on RHEL-Based Systems

The NGINX upload module is available as a pre-built package from the GetPageSpeed repository. This is the recommended installation method for production servers running CentOS, Rocky Linux, AlmaLinux, or RHEL.

Enable the GetPageSpeed Repository

If you haven't already enabled the GetPageSpeed repository, run:

sudo dnf install https://extras.getpagespeed.com/release-latest.rpm

Install the NGINX Upload Module

Install the module package:

sudo dnf install nginx-module-upload

Load the Module

Add the following line to the top of your /etc/nginx/nginx.conf file, before the events block:

load_module modules/ngx_http_upload_module.so;

Verify Installation

Confirm the module is loaded correctly:

nginx -t

You should see:

nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
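If the test passes, restart NGINX so the newly loaded module becomes active:

sudo systemctl restart nginx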

Configuration Directives Reference

The NGINX upload module provides extensive configuration options. Below is a complete reference of all available directives.

upload_pass

Syntax: upload_pass location
Default: none
Context: server, location

Specifies the location to pass the modified request body to after file processing. File fields are stripped and replaced with metadata fields.

location /upload {
    upload_pass @backend;
}

location @backend {
    proxy_pass http://127.0.0.1:8080;
}

upload_store

Syntax: upload_store directory [level1 [level2 [level3]]]
Default: none
Context: server, location

Specifies where uploaded files will be saved. You can use hashed subdirectories to prevent filesystem performance issues with many files. The level parameters define how many characters from the generated filename are used for each subdirectory level.

# Simple flat directory
upload_store /var/upload;

# Hashed directory with one level
upload_store /var/upload 1;

# Hashed directory with two levels
upload_store /var/upload 1 2;

When using hashed directories, create subdirectories before starting NGINX:

# For single-level hashing (upload_store /var/upload 1)
mkdir -p /var/upload
for i in $(seq 0 9); do
    mkdir -p /var/upload/$i
done

# For two-level hashing (upload_store /var/upload 1 2)
mkdir -p /var/upload
for i in $(seq 0 9); do
    for j in $(seq -w 00 99); do
        mkdir -p /var/upload/$i/$j
    done
done

upload_state_store

Syntax: upload_state_store directory [level1 [level2 [level3]]]
Default: none
Context: server, location

Specifies the directory for state files used by resumable uploads. Like upload_store, this directory can be hashed.

upload_state_store /var/upload/state 1;

upload_store_access

Syntax: upload_store_access mode
Default: user:rw
Context: server, location

Sets the access permissions for uploaded files.

# Owner read/write only (default)
upload_store_access user:rw;

# Owner read/write, group read
upload_store_access user:rw group:r;

# Owner read/write, group read, others read
upload_store_access user:rw group:r all:r;

upload_set_form_field

Syntax: upload_set_form_field name value
Default: none
Context: server, location

Generates form fields for each uploaded file. Both name and value can contain special variables:

Variable                Description
$upload_field_name      Original form field name
$upload_content_type    Content-Type of the uploaded file
$upload_file_name       Original filename (path components stripped)
$upload_tmp_path        Path where the file is stored
$upload_file_number     Ordinal number of the file in the request

upload_set_form_field $upload_field_name.name "$upload_file_name";
upload_set_form_field $upload_field_name.content_type "$upload_content_type";
upload_set_form_field $upload_field_name.path "$upload_tmp_path";

upload_aggregate_form_field

Syntax: upload_aggregate_form_field name value
Default: none
Context: server, location

Similar to upload_set_form_field, but for fields requiring complete file upload first (checksums, file size). Available variables:

Variable                  Description
$upload_file_md5          MD5 checksum (lowercase)
$upload_file_md5_uc       MD5 checksum (uppercase)
$upload_file_sha1         SHA1 checksum (lowercase)
$upload_file_sha1_uc      SHA1 checksum (uppercase)
$upload_file_sha256       SHA256 checksum (lowercase)
$upload_file_sha256_uc    SHA256 checksum (uppercase)
$upload_file_sha512       SHA512 checksum (lowercase)
$upload_file_sha512_uc    SHA512 checksum (uppercase)
$upload_file_crc32        CRC32 checksum (hexadecimal)
$upload_file_size         File size in bytes

upload_aggregate_form_field $upload_field_name.md5 "$upload_file_md5";
upload_aggregate_form_field $upload_field_name.size "$upload_file_size";
upload_aggregate_form_field $upload_field_name.sha256 "$upload_file_sha256";

Note: Computing checksums requires additional CPU resources. Only enable the checksums you need.

upload_pass_form_field

Syntax: upload_pass_form_field regex
Default: none
Context: server, location

Specifies a regex pattern for form field names to pass through to the backend. Without this directive, non-file fields are discarded.

# Pass specific fields
upload_pass_form_field "^submit$|^description$|^title$";

# Pass all fields
upload_pass_form_field "^.*$";

Note: There is a known bug in the upstream module where this directive may not work correctly with certain regex patterns. A fix is available but not yet merged. If you need to pass non-file form fields, consider using query string parameters with upload_pass_args on as a workaround.

upload_cleanup

Syntax: upload_cleanup status [status ...]
Default: none
Context: server, location

Specifies HTTP status codes that trigger automatic cleanup of uploaded files. Status codes must be between 200 and 599. Ranges use a dash.

upload_cleanup 400 404 499 500-505;

upload_buffer_size

Syntax: upload_buffer_size size
Default: System page size (typically 4096)
Context: server, location

Sets the buffer size for writing files to disk. Larger buffers reduce system call overhead but increase memory usage.

upload_buffer_size 128k;

upload_max_part_header_len

Syntax: upload_max_part_header_len size
Default: 512
Context: server, location

Maximum length of each part's header. Increase this for very long filenames.

upload_max_part_header_len 1024;

upload_max_file_size

Syntax: upload_max_file_size size
Default: 0 (unlimited)
Context: main, server, location

Sets a soft limit on individual file size. Files exceeding this limit are skipped, but the request continues. For a hard limit, use client_max_body_size.

upload_max_file_size 100m;

upload_max_output_body_len

Syntax: upload_max_output_body_len size
Default: 100k
Context: main, server, location

Maximum size of the modified request body passed to the backend. Returns HTTP 413 if exceeded.

upload_max_output_body_len 256k;

upload_limit_rate

Syntax: upload_limit_rate rate
Default: 0 (unlimited)
Context: main, server, location

Limits upload speed in bytes per second. Useful for preventing bandwidth saturation.

upload_limit_rate 1m;

upload_pass_args

Syntax: upload_pass_args on | off
Default: off
Context: main, server, location

When enabled, query string arguments are forwarded to the upload_pass location.

upload_pass_args on;

upload_tame_arrays

Syntax: upload_tame_arrays on | off
Default: off
Context: main, server, location

Removes square brackets from field names. Enable this for PHP array notation (e.g., files[]).

upload_tame_arrays on;

upload_resumable

Syntax: upload_resumable on | off
Default: off
Context: main, server, location

Enables resumable upload support. Clients can upload files in chunks and resume interrupted uploads.

upload_resumable on;

upload_add_header

Syntax: upload_add_header name value
Default: none
Context: server, location

Adds custom headers to the response. Both name and value can contain variables.

upload_add_header X-Upload-File-Count "$upload_file_number";

upload_empty_field_names

Syntax: upload_empty_field_names on | off
Default: off
Context: main, server, location

Allows file fields with empty names to be processed.

upload_empty_field_names on;

Complete NGINX Upload Module Configuration Example

Here is a production-ready configuration for the NGINX upload module:

# Load the upload module
load_module modules/ngx_http_upload_module.so;

http {
    # Increase client body size limit for large uploads
    client_max_body_size 500m;

    server {
        listen 80;
        server_name upload.example.com;

        # Upload endpoint using the NGINX upload module
        location /upload {
            # Pass modified request to backend
            upload_pass @backend;

            # Store files with hashed subdirectories
            upload_store /var/upload 1 2;

            # Set file permissions
            upload_store_access user:rw group:r;

            # Generate metadata fields for backend
            upload_set_form_field "${upload_field_name}_name" "$upload_file_name";
            upload_set_form_field "${upload_field_name}_content_type" "$upload_content_type";
            upload_set_form_field "${upload_field_name}_path" "$upload_tmp_path";

            # Include file checksums and size
            upload_aggregate_form_field "${upload_field_name}_md5" "$upload_file_md5";
            upload_aggregate_form_field "${upload_field_name}_sha256" "$upload_file_sha256";
            upload_aggregate_form_field "${upload_field_name}_size" "$upload_file_size";

            # Clean up files on backend errors
            upload_cleanup 400 404 499 500-505;

            # Forward query parameters (use this for metadata instead of form fields)
            upload_pass_args on;
        }

        # Backend handler
        location @backend {
            proxy_pass http://127.0.0.1:8080;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }
}

PHP Backend Example

Here is a complete PHP backend script that processes uploads from the NGINX upload module. This example demonstrates how to handle file metadata, move files to a permanent location, and return a JSON response.

PHP Upload Handler

<?php
/**
 * NGINX Upload Module Backend Handler
 * 
 * This script processes file uploads handled by the NGINX upload module.
 * NGINX stores files and passes metadata; this script moves files to
 * their final destination and returns upload results.
 */

header('Content-Type: application/json');

// Configuration
$uploadDir = '/var/www/uploads';  // Final destination for uploaded files
$allowedTypes = ['image/jpeg', 'image/png', 'image/gif', 'application/pdf'];
$maxFileSize = 50 * 1024 * 1024;  // 50 MB

// Ensure upload directory exists
if (!is_dir($uploadDir)) {
    mkdir($uploadDir, 0755, true);
}

/**
 * Extract file metadata from POST data.
 * 
 * NGINX upload module sends fields like:
 *   fieldname_name, fieldname_path, fieldname_size, etc.
 * 
 * PHP converts dots to underscores, so field.name becomes field_name.
 */
function extractFileMetadata(array $postData): array {
    $files = [];
    $suffixes = ['_name', '_content_type', '_path', '_md5', '_sha256', '_size'];

    foreach ($postData as $key => $value) {
        foreach ($suffixes as $suffix) {
            if (str_ends_with($key, $suffix)) {
                $fieldName = substr($key, 0, -strlen($suffix));
                $property = ltrim($suffix, '_');
                $files[$fieldName][$property] = $value;
                break;
            }
        }
    }

    return $files;
}

/**
 * Validate and move an uploaded file to its final destination.
 */
function processUploadedFile(array $fileData, string $uploadDir, array $allowedTypes, int $maxFileSize): array {
    $result = [
        'success' => false,
        'original_name' => $fileData['name'] ?? 'unknown',
        'error' => null,
    ];

    // Validate required fields
    if (empty($fileData['path']) || empty($fileData['name'])) {
        $result['error'] = 'Missing file path or name';
        return $result;
    }

    $tempPath = $fileData['path'];

    // Verify temp file exists
    if (!file_exists($tempPath)) {
        $result['error'] = 'Temporary file not found';
        return $result;
    }

    // Validate file size
    $fileSize = (int)($fileData['size'] ?? filesize($tempPath));
    if ($fileSize > $maxFileSize) {
        unlink($tempPath);
        $result['error'] = 'File exceeds maximum size';
        return $result;
    }

    // Validate content type
    $contentType = $fileData['content_type'] ?? 'application/octet-stream';
    if (!in_array($contentType, $allowedTypes, true)) {
        unlink($tempPath);
        $result['error'] = 'File type not allowed: ' . $contentType;
        return $result;
    }

    // Generate safe filename
    $extension = pathinfo($fileData['name'], PATHINFO_EXTENSION);
    $safeExtension = preg_replace('/[^a-zA-Z0-9]/', '', $extension);
    $uniqueName = uniqid('upload_', true) . '.' . $safeExtension;
    $finalPath = $uploadDir . '/' . $uniqueName;

    // Move file to final destination
    if (!rename($tempPath, $finalPath)) {
        // Try copy if rename fails (cross-filesystem)
        if (!copy($tempPath, $finalPath)) {
            $result['error'] = 'Failed to move file';
            return $result;
        }
        unlink($tempPath);
    }

    // Set proper permissions
    chmod($finalPath, 0644);

    $result['success'] = true;
    $result['final_path'] = $finalPath;
    $result['filename'] = $uniqueName;
    $result['size'] = $fileSize;
    $result['content_type'] = $contentType;
    $result['md5'] = $fileData['md5'] ?? null;
    $result['sha256'] = $fileData['sha256'] ?? null;

    return $result;
}

// Process the request
$response = [
    'success' => true,
    'files' => [],
    'errors' => [],
];

// Extract file metadata from POST data
$files = extractFileMetadata($_POST);

if (empty($files)) {
    $response['success'] = false;
    $response['errors'][] = 'No files received';
    echo json_encode($response, JSON_PRETTY_PRINT);
    exit;
}

// Process each uploaded file
foreach ($files as $fieldName => $fileData) {
    $result = processUploadedFile($fileData, $uploadDir, $allowedTypes, $maxFileSize);
    $result['field_name'] = $fieldName;

    if ($result['success']) {
        $response['files'][] = $result;
    } else {
        $response['errors'][] = $result;
        $response['success'] = false;
    }
}

echo json_encode($response, JSON_PRETTY_PRINT);

NGINX Configuration for PHP Backend

location /upload {
    upload_pass @php_backend;
    upload_store /var/upload 1 2;
    upload_store_access user:rw group:r all:r;

    # Use underscores instead of dots for PHP compatibility
    upload_set_form_field "${upload_field_name}_name" "$upload_file_name";
    upload_set_form_field "${upload_field_name}_content_type" "$upload_content_type";
    upload_set_form_field "${upload_field_name}_path" "$upload_tmp_path";

    upload_aggregate_form_field "${upload_field_name}_md5" "$upload_file_md5";
    upload_aggregate_form_field "${upload_field_name}_sha256" "$upload_file_sha256";
    upload_aggregate_form_field "${upload_field_name}_size" "$upload_file_size";

    upload_cleanup 400 404 499 500-505;
    upload_pass_args on;
}

location @php_backend {
    fastcgi_pass unix:/run/php-fpm/www.sock;
    fastcgi_param SCRIPT_FILENAME /var/www/html/upload-handler.php;
    fastcgi_param REQUEST_METHOD POST;
    fastcgi_param CONTENT_TYPE $content_type;
    fastcgi_param CONTENT_LENGTH $content_length;
    fastcgi_param QUERY_STRING $query_string;
    include fastcgi_params;
}

Testing the PHP Backend

Test the upload endpoint with curl:

curl -X POST \
  -F "document=@/path/to/test.pdf" \
  -F "image=@/path/to/photo.jpg" \
  http://upload.example.com/upload

Expected response:

{
    "success": true,
    "files": [
        {
            "success": true,
            "original_name": "test.pdf",
            "field_name": "document",
            "final_path": "/var/www/uploads/upload_65f1a2b3c4d5e.pdf",
            "filename": "upload_65f1a2b3c4d5e.pdf",
            "size": 102400,
            "content_type": "application/pdf",
            "md5": "d41d8cd98f00b204e9800998ecf8427e",
            "sha256": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4..."
        },
        {
            "success": true,
            "original_name": "photo.jpg",
            "field_name": "image",
            "final_path": "/var/www/uploads/upload_65f1a2b3c4d5f.jpg",
            "filename": "upload_65f1a2b3c4d5f.jpg",
            "size": 204800,
            "content_type": "image/jpeg",
            "md5": "098f6bcd4621d373cade4e832627b4f6",
            "sha256": "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b..."
        }
    ],
    "errors": []
}

Resumable Uploads with the NGINX Upload Module

The NGINX upload module supports resumable uploads, allowing large files to be transferred in chunks and resumed after an interrupted connection, which is essential for clients on unreliable networks.

Enabling Resumable Uploads

location /upload {
    upload_pass @backend;
    upload_store /var/upload 1;
    upload_state_store /var/upload/state 1;
    upload_resumable on;

    upload_set_form_field $upload_field_name.name "$upload_file_name";
    upload_set_form_field $upload_field_name.path "$upload_tmp_path";
}

How Resumable Uploads Work

The resumable upload protocol splits files into segments transmitted in separate HTTP requests:

  1. Client generates a unique session ID
  2. Client sends file segments with X-Content-Range headers
  3. Server stores segments and tracks progress in the state store
  4. Server responds with 201 Created until complete
  5. When all segments arrive, server returns 200 OK

Example request for the first segment:

POST /upload HTTP/1.1
Host: example.com
Content-Length: 51201
Content-Type: application/octet-stream
Content-Disposition: attachment; filename="large-file.zip"
X-Content-Range: bytes 0-51200/511920
Session-ID: abc123

<bytes 0-51200>

Server response:

HTTP/1.1 201 Created
Range: 0-51200/511920

0-51200/511920
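Below is a rough shell sketch of driving this protocol with curl, assuming a hypothetical 524288-byte large-file.zip split into two 262144-byte chunks; adjust the sizes, filename, and session ID for your own test:

# Split the file into two fixed-size chunks (creates chunk_aa and chunk_ab)
split -b 262144 large-file.zip chunk_

# First segment: bytes 0-262143 of 524288 (server should answer 201 Created)
curl -X POST http://example.com/upload \
  -H "Content-Type: application/octet-stream" \
  -H 'Content-Disposition: attachment; filename="large-file.zip"' \
  -H "X-Content-Range: bytes 0-262143/524288" \
  -H "Session-ID: abc123" \
  --data-binary @chunk_aa

# Final segment: bytes 262144-524287 of 524288 (server should answer 200 OK)
curl -X POST http://example.com/upload \
  -H "Content-Type: application/octet-stream" \
  -H 'Content-Disposition: attachment; filename="large-file.zip"' \
  -H "X-Content-Range: bytes 262144-524287/524288" \
  -H "Session-ID: abc123" \
  --data-binary @chunk_ab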

Testing Your NGINX Upload Module Configuration

Basic Upload Test

Create a simple HTML form to test uploads:

<!DOCTYPE html>
<html>
<head><title>Upload Test</title></head>
<body>
    <form method="POST" enctype="multipart/form-data" action="/upload">
        <input type="file" name="document">
        <button type="submit">Upload</button>
    </form>
</body>
</html>

Command-Line Testing with curl

Test uploading a file with curl:

curl -X POST \
  -F "document=@/path/to/testfile.pdf" \
  http://upload.example.com/upload

Your backend will receive a request with fields like:

document_name=testfile.pdf
document_content_type=application/pdf
document_path=/var/upload/3/00/0000000003
document_md5=d41d8cd98f00b204e9800998ecf8427e
document_sha256=e3b0c44298fc1c149afbf4c8996fb924...
document_size=12345
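Because the backend receives only metadata, a quick way to sanity-check an upload is to compare the reported checksum against the stored file (path and hash here match the illustrative values above):

md5sum /var/upload/3/00/0000000003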

Verifying File Storage

Check that files are being stored correctly:

# List uploaded files
find /var/upload -type f -name "0*"

# Check file permissions
stat /var/upload/3/00/0000000003

Performance Considerations

Buffer Size Tuning

The upload_buffer_size directive controls how much data is buffered before writing to disk. Increase it for better throughput on fast storage:

# Larger buffer for SSD storage
upload_buffer_size 256k;

Consider your expected concurrent upload count when adjusting this value: each in-progress upload holds one buffer, so 500 concurrent uploads with a 256k buffer consume roughly 128 MB of memory for upload buffers alone.

Directory Hashing

For systems expecting many uploads, always use hashed directories:

upload_store /var/upload 1 2;

This distributes files across 1,000 subdirectories (10 × 100), which prevents the filesystem performance issues caused by keeping too many files in a single directory.

Checksum Computation

Computing checksums adds CPU overhead. Only enable the checksums you need:

# Only MD5 for basic integrity checking
upload_aggregate_form_field $upload_field_name.md5 "$upload_file_md5";

# Add SHA256 only if required for security verification
upload_aggregate_form_field $upload_field_name.sha256 "$upload_file_sha256";

Avoid enabling SHA512 unless specifically required, as it is more computationally expensive.

Rate Limiting

For public upload endpoints, consider rate limiting to prevent abuse. Combine the module's upload_limit_rate with NGINX's built-in limit_req for per-client request limits:

# Limit per-connection upload speed
upload_limit_rate 5m;

# Combine with NGINX rate limiting
limit_req_zone $binary_remote_addr zone=upload:10m rate=1r/s;

location /upload {
    limit_req zone=upload burst=5;
    upload_pass @backend;
    # ... other directives
}

Security Best Practices

Restrict Upload Location

Never allow uploads to web-accessible directories. Store files outside the web root:

# Good: Upload to separate directory outside web root
upload_store /var/upload 1;

# Bad: Upload to web-accessible directory
upload_store /var/www/html/uploads 1;

Validate File Types in Backend

The NGINX upload module does not validate file contents. Always verify file types in your backend using tools like ModSecurity or application-level validation:

# Python example using python-magic
import os
import magic

# uploaded_file_path comes from the metadata fields NGINX passed to the backend
mime = magic.Magic(mime=True)
file_type = mime.from_file(uploaded_file_path)

allowed_types = ['image/jpeg', 'image/png', 'application/pdf']
if file_type not in allowed_types:
    os.remove(uploaded_file_path)
    raise ValidationError('Invalid file type')  # ValidationError: your application's exception class

Set Appropriate File Permissions

Restrict file permissions to prevent unauthorized access:

# Recommended: Owner read/write only
upload_store_access user:rw;

Enable Cleanup for Errors

Always configure cleanup to prevent disk exhaustion from failed uploads:

upload_cleanup 400-499 500-599;

Limit Request Size

Set appropriate limits for your use case:

# Hard limit on total request size
client_max_body_size 100m;

# Soft limit on individual file size
upload_max_file_size 50m;

Troubleshooting Common Issues

Error: "upload_store directive not set"

You must specify upload_store in every location using upload_pass:

location /upload {
    upload_store /var/upload;  # Required
    upload_pass @backend;
}

Files Not Appearing in upload_store

Check directory permissions:

# Directory must be writable by nginx user
chown -R nginx:nginx /var/upload
chmod 755 /var/upload
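To confirm the worker can actually write there (assuming NGINX runs as the nginx user), try creating a file on its behalf:

sudo -u nginx touch /var/upload/write-test && sudo -u nginx rm /var/upload/write-test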

Backend Not Receiving File Metadata

Ensure you have configured upload_set_form_field:

upload_set_form_field $upload_field_name.path "$upload_tmp_path";

"Request Entity Too Large" Error

Increase client_max_body_size:

client_max_body_size 500m;

Non-File Form Fields Not Received

There is a known bug in the upstream module where upload_pass_form_field may not work correctly. As a workaround, pass metadata via query string parameters using upload_pass_args on:

upload_pass_args on;

Then submit forms with metadata in the URL: /upload?title=MyDocument&author=JohnDoe
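For example, with curl (hypothetical metadata values):

curl -X POST \
  -F "document=@/path/to/testfile.pdf" \
  "http://upload.example.com/upload?title=MyDocument&author=JohnDoe"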

Hashed Directory Not Found

Create all required subdirectories before starting NGINX:

# For upload_store /var/upload 1
for dir in $(seq 0 9); do
    mkdir -p /var/upload/$dir
done

# For upload_store /var/upload 1 2
for i in $(seq 0 9); do
    for j in $(seq -w 00 99); do
        mkdir -p /var/upload/$i/$j
    done
done

Conclusion

The NGINX upload module provides an efficient solution for handling file uploads on high-traffic servers. By offloading file processing from your backend to NGINX, you can improve upload performance and reduce memory usage. The module supports resumable uploads for large files.

Key takeaways for system administrators:

  • Install via the GetPageSpeed repository for easy updates
  • Always use hashed directories for file storage
  • Enable only the checksums you need
  • Configure cleanup to prevent disk exhaustion
  • Validate file contents in your backend application
  • Use query string parameters for metadata due to a known issue with form field passthrough

For more information, visit the official GitHub repository.

