Channel: Nginx Forum - How to...
Viewing all 2931 articles

Static files slooooooow (no replies)

Hi,
I recently moved my site from shared hosting to a VDS with nginx. The improvement on every page that does not contain heavy elements is very obvious: pages load much faster. However, there is something wrong with the static files. Starting with the ~170 KB font file: it takes a few seconds for the font to "apply" when I visit the site in a fresh anonymous tab. And it is far worse with bigger files: PDF files take ages to load.

This Pingdom report ( https://tools.pingdom.com/#!/dWuIkE/https://www.bykasov.com/2016/oda-sobakam-severa ) shows several attempts to access the PDF file – why?

While on shared hosting the average text page loaded more slowly, these static files took far less time (even on pages with several PDFs at once, like category pages).

Apparently there is something wrong with my configuration and I would appreciate any help.

My nginx.conf:

# For more information on configuration, see:
# * Official English Documentation: http://nginx.org/en/docs/
# * Official Russian Documentation: http://nginx.org/ru/docs/

user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

# Load dynamic modules. See /usr/share/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;

events {
worker_connections 1024;
}

http {
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';

access_log /var/log/nginx/access.log main;

sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;

include /etc/nginx/mime.types;
default_type application/octet-stream;

server_names_hash_bucket_size 64;

# Load modular configuration files from the /etc/nginx/conf.d directory.
# See http://nginx.org/en/docs/ngx_core_module.html#include
# for more information.
include /etc/nginx/conf.d/*.conf;
charset utf-8;

server {
listen 80 default_server;
listen [::]:80 default_server;
server_name _;
root /usr/share/nginx/html;

# Load configuration files for the default server block.
include /etc/nginx/default.d/*.conf;
include /etc/nginx/hhvm.conf;

location / {
}

error_page 404 /404.html;
location = /40x.html {
}

error_page 500 502 503 504 /50x.html;
location = /50x.html {
}
}

# Settings for a TLS enabled server.
#
# server {
# listen 443 ssl http2 default_server;
# listen [::]:443 ssl http2 default_server;
# server_name _;
# root /usr/share/nginx/html;
#
# ssl_certificate "/etc/pki/nginx/server.crt";
# ssl_certificate_key "/etc/pki/nginx/private/server.key";
# ssl_session_cache shared:SSL:1m;
# ssl_session_timeout 10m;
# ssl_ciphers HIGH:!aNULL:!MD5;
# ssl_prefer_server_ciphers on;
#
# # Load configuration files for the default server block.
# include /etc/nginx/default.d/*.conf;
#
# location / {
# }
#
# error_page 404 /404.html;
# location = /40x.html {
# }
#
# error_page 500 502 503 504 /50x.html;
# location = /50x.html {
# }
# }

gzip on;
gzip_vary on;
gzip_proxied any;
gzip_comp_level 6;
gzip_buffers 16 8k;
gzip_http_version 1.1;
gzip_types image/svg+xml text/plain text/xml text/css text/javascript application/xml application/xhtml+xml application/rss+xml application/javascript application/x-javascript application/x-font-ttf application/vnd.ms-fontobject font/opentype font/ttf font/eot font/otf;

}


My site conf file:

server {
listen 80;
server_name bykasov.com www.bykasov.com;
return 301 https://www.bykasov.com$request_uri;
}

server {
listen 443 ssl http2;
server_name bykasov.com www.bykasov.com;

ssl_certificate /etc/letsencrypt/live/bykasov.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/bykasov.com/privkey.pem;

ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;

ssl_dhparam /etc/ssl/certs/dhparam.pem;
ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
ssl_session_timeout 1d;
ssl_session_cache shared:SSL:50m;

ssl_stapling on;
ssl_stapling_verify on;
add_header Strict-Transport-Security max-age=15768000;
access_log (.....removed....);

# The rest of your server block
root (....removed....);
index index.php index.html index.htm;

directio 300k;
#output_buffers 2 1M;

#sendfile on;
#sendfile_max_chunk 256k;

location ^~ /.well-known/acme-challenge/ {
}

location / {
try_files $uri $uri/ /index.php?$args;
}

error_page 404 /404.html;
location = /50x.html {
root /(...removed....);
}

location ~* /wp-includes/.*\.php$ {
deny all;
access_log off;
log_not_found off;
}

location ~* /wp-content/.*\.php$ {
deny all;
access_log off;
log_not_found off;
}

location ~ ^/(wp-config\.php) {
deny all;
access_log off;
log_not_found off;
}

location ~ ^/(wp-login\.php) {
# allow (.....removed.....);
deny all;
}

location ~ \.php$ {
try_files $uri =404;
fastcgi_pass unix:/var/run/hhvm/hhvm.sock;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
}


location ~ \.(js|css|png|jpg|jpeg|gif|ico|html|woff|woff2|ttf|svg|eot|otf)$ {
add_header "Access-Control-Allow-Origin" "*";
expires 1M;
access_log off;
add_header Cache-Control "public";
}

}



The directio / output_buffers / sendfile part is something I've tried, but I could not see it making any difference.
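For what it's worth (a guess based on the posted config, not a confirmed diagnosis): `directio 300k` automatically disables sendfile and bypasses the page cache for every file of 300 KB or larger, which matches the symptom that small assets are fast while the PDFs crawl. A minimal sketch of the static-file part, assuming the rest of the server block stays as posted:

```nginx
server {
    # ... listen/ssl/root directives as posted ...

    # directio 300k;          # removed: it disables sendfile and bypasses the
                              # page cache for every file >= 300k
    sendfile on;
    sendfile_max_chunk 512k;  # keep one fast connection from hogging a worker

    location ~* \.(pdf|woff2?|ttf|otf|eot)$ {
        expires 1M;
        add_header Cache-Control "public";
    }
}
```

directio only pays off for very large files on fast disks; for a typical blog, sendfile plus the kernel page cache is the better default.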

Is an IP address valid for server_name? (2 replies)

So I will be using nginx as a reverse proxy. I do not have a domain name for my server yet. I am in development.

Can I use the IP address such as the following in /etc/nginx/sites-available/default:

server {
listen 80;

server_name 1.2.3.4; # An obviously fake IP.

location / {
proxy_pass http://localhost:3000;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
}

Thanks in advance for your reply!

Ray
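For the record: yes, an IP address is a valid server_name (nginx matches it against the Host header, which browsers set to the IP when you browse to http://1.2.3.4). During development it is often simpler to also mark the block as default_server so it catches every request on port 80 regardless of Host. A minimal sketch:

```nginx
server {
    listen 80 default_server;  # catches all port-80 traffic, whatever the Host
    server_name 1.2.3.4;       # an IP address is a legal server_name value

    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
```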

How to redirect Nginx port 80 to 8080 Tomcat and make webapp main page? (no replies)

Hello,

I have a site (its name is not real, but it is something like) http://example.co
Tomcat is installed there on port 8080 with a web app: http://example.co:8080/web_app/

The question is: which nginx server configuration in nginx.conf should I use to serve web_app as the http://example.co main page, without port 8080 in the URL and without any extra path segments?

I tried this manual:

https://stackoverflow.com/questions/19866203/nginx-configuration-to-pass-site-directly-to-tomcat-webapp-with-context

and also this

https://www.digitalocean.com/community/questions/how-to-change-the-default-nginx-page-to-my-web-application-home-page

but it does not work for me. It just redirects to port 8080, but I need the web_app page to open as the main site page, http://example.co, without the web_app suffix.
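A minimal sketch of one common way to do this, assuming Tomcat on localhost:8080 and the context path /web_app/ (names taken from the post): proxy_pass with a trailing URI maps / onto the web-app context, while proxy_redirect and proxy_cookie_path fix up the redirects and cookies the app issues under its own context path.

```nginx
server {
    listen 80;
    server_name example.co;

    location / {
        # The trailing /web_app/ makes nginx replace the matched prefix,
        # so http://example.co/ maps to http://127.0.0.1:8080/web_app/
        proxy_pass http://127.0.0.1:8080/web_app/;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # Rewrite Location headers and cookie paths back out of the context
        proxy_redirect http://127.0.0.1:8080/web_app/ /;
        proxy_cookie_path /web_app/ /;
    }
}
```

Apps that emit absolute links containing /web_app/ in the HTML itself still need to be made context-aware (or deployed as Tomcat's ROOT webapp).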

Prevent json truncation from large POST requests (no replies)

Hi there,
I'm having an issue with nginx and sending large POST requests. I have a load balancer that forwards requests to an EC2 instance running nginx on port 80 and my application on port 9928. See the attached image for an architecture diagram. Submitting directly to my application works fine, while submitting through nginx causes the error, which makes me think this is an nginx issue.

When I submit moderately sized POST requests, everything is fine, but when I increase the size of the POST request to multiple megabytes, my application reports "message is too long" and that the JSON is invalid because it has an unexpected end of input. I think my POST request is getting truncated somewhere between nginx and my application server, because the $request_body that I save in a log looks fine.

Here is a snippet of my debug nginx log:

*5 http client request body recv 8949
*5 http client request body rest 5239409
*5 recv: fd:9 -1 of 3693359
*5 recv() not ready (11: Resource temporarily unavailable)
*5 http client request body recv -2
*5 http client request body rest 5239409
*5 event timer: 9, old: 1515520710736, new: 1515520710905
*5 post event 000055A779AE3B40
*5 delete posted event 000055A779AE3B40
*5 http run request: "/mps/updateFunction?mode=async"

From Line 1796 of this Failed NGINX Log:
https://gist.github.com/CaptainChemist/b1562b40b4a2da89bf8ed452e7cac4d4

By comparison, this is a Successful NGINX Log:
https://gist.github.com/CaptainChemist/ab920a953ead13d7244a657e1521ab71

Here are my configuration files, clearly they are kind of a mess but thanks so much for your help!

**nginx.conf**

user www-data;
worker_processes auto;
pid /run/nginx.pid;

events {
worker_connections 768;
}

http {
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;

include /etc/nginx/mime.types;
default_type application/octet-stream;

ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
ssl_prefer_server_ciphers on;

access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log debug;

include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}

**sites-enabled/default**

upstream processingServer {
server 127.0.0.1:9928;
keepalive 25600;
}

log_format postdata $request_body;
log_format upstreamlog $request_body;

server {
listen 80;
client_max_body_size 5000M;
client_body_buffer_size 5000M;
client_header_timeout 3m;
client_body_timeout 3m;
send_timeout 3m;
keepalive_timeout 100000;
location / {
access_log /var/log/nginx/postdata.log postdata;
access_log /var/log/nginx/upstream_postdata.log upstreamlog;

add_header X-external-IP 54.89.000.000;
proxy_pass http://processingServer;
proxy_send_timeout 86400s;
proxy_read_timeout 300s;
proxy_http_version 1.1;
}
}
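One thing to double-check (an observation, not a confirmed diagnosis): client_body_buffer_size 5000M asks nginx to hold the entire body in RAM, and $request_body is only populated when the body fits in that memory buffer, so the postdata log can look fine even if something downstream misbehaves. A more conventional sketch, with assumed sizes, lets large bodies spill to a temp file while still forwarding them intact:

```nginx
server {
    listen 80;

    client_max_body_size    100m;  # hard cap on accepted body size (assumed value)
    client_body_buffer_size 1m;    # larger bodies go to a temp file, not RAM

    location / {
        proxy_http_version 1.1;
        proxy_request_buffering on;          # read the full body before proxying
        proxy_pass http://processingServer;  # upstream block as posted
        proxy_read_timeout 300s;
    }
}
```

If the app still sees a truncated body, comparing Content-Length on both sides of nginx (e.g. with tcpdump) will show which hop drops data.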

Is sendfile option compatibile with TLS? (no replies)

Hello!

I'm struggling to understand how nginx handles encrypting messages with the TLS protocol when sendfile is on.

The premise of sendfile is to avoid user space entirely: the kernel sends the given file straight to the socket. So it seems there is no way to encrypt the file in user space using sendfile alone.
Is nginx using some other solution, as described by the folks from Netflix ( https://people.freebsd.org/~rrs/asiabsd_2015_tls.pdf ), or is sendfile always off for HTTPS communication?

ssl_ciphers explained (no replies)

Hi,

This may sound like a stupid question, but I have not found any clear answer to it.
Could someone explain the ssl_ciphers options in nginx?

For example 'ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256'.

I have read that you should disable RSA due to the ROBOT vulnerability (https://robotattack.org/). Does that mean that I should remove all of the ciphers above that contain RSA?

And does, for example, the cipher 'ECDHE-ECDSA-CHACHA20-POLY1305' indicate the order in which messages are encrypted?

Thanks for any answers!
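Not a stupid question. An OpenSSL cipher-suite name is read left to right as key exchange, authentication, bulk encryption, and MAC/PRF; it says nothing about the order in which messages are encrypted. The ROBOT attack concerns RSA *key exchange*, not RSA *authentication*: ECDHE-RSA-... suites use RSA only for signatures and are not affected, while suites with no key-exchange prefix (e.g. AES128-SHA) use RSA key transport and are the ones to drop. The openssl CLI can decode any cipher string; a quick sketch:

```shell
# Decode each suite in an nginx ssl_ciphers string: the -v columns show
# protocol, Kx (key exchange), Au (authentication), Enc, and Mac.
openssl ciphers -v 'ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256'

# List only the suites that actually use RSA *key exchange* (ROBOT-relevant):
openssl ciphers -v 'kRSA'
```

So for the string above, removing everything containing "RSA" would needlessly drop the ECDHE-RSA suites that most RSA-certificate sites still need; removing the static-RSA (kRSA) suites is what ROBOT mitigation actually calls for.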

multiple nginx error log (no replies)

Hi ,

I have nginx 1.4.6 running on Ubuntu 14.04.5 LTS, and I recently changed the paths of access_log and error_log to a different location (which has a bigger disk).
However, after changing the paths in nginx.conf, I still see the error log being written to the regular log path /var/log/nginx/error.log as well as to the new path, but with different timestamps and content. I am seeking advice from this forum on this issue. Much appreciated, thanks.

[standard log path]
root@ip-172-31-13-74:/etc/nginx# ll /var/log/nginx/error.log
-rw-r--r-- 1 www-data root 75097175 Jan 11 15:40 /var/log/nginx/error.log

[new log path]
root@ip-172-31-13-74:/etc/nginx# ll /log/nginx/error.log
-rw-r----- 1 www-data adm 3716 Jan 11 15:29 /log/nginx/error.log

[nginx.conf setting for the logs]
access_log /log/nginx/access.log;
error_log /log/nginx/error.log;

Filtering by content type (no replies)

Hi everyone, I've been looking for this answer but I couldn't find a straight one.

I'm hardening my REST API, and one of the points I have to implement says: "Reject requests containing unexpected or missing content type headers with HTTP response status 406 Not Acceptable or 415 Unsupported Media Type". I've been looking into filtering by content type, but I don't have any clue how to solve this, and I also haven't found how to send a specific HTTP status code in that scenario.

Can anyone guide me to how to solve this?

Thank you very much in advance.

Kind regards,
Rodrigoqwq
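A minimal sketch of one way to do this with a map (the whitelisted media types here are assumptions; adjust them to your API): $content_type carries the request's Content-Type header, and anything not matched, including a missing header, gets a 415.

```nginx
# Flag every Content-Type that is not on the whitelist (missing header included).
map $content_type $unsupported_media_type {
    default                               1;
    "~^application/json"                  0;  # regex: tolerates charset suffixes
    "~^application/x-www-form-urlencoded" 0;
}

server {
    listen 80;

    location /api/ {
        if ($unsupported_media_type) {
            return 415;  # Unsupported Media Type
        }
        proxy_pass http://127.0.0.1:8080;  # assumed backend
    }
}
```

Use return 406 instead (or a second map on $http_accept) if you also want to police the Accept header.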

Nginx infront of WOWZA streaming server (no replies)

We have successfully set up an nginx SSL-offloading server in front of our Wowza streaming server. HLS streams work without issues.

The problem comes in when trying to get streams to play on Android devices. Is there a way to proxy RTMPS like we do with the HLS traffic through nginx, or is there a better way to handle this traffic back to the Wowza server?

TIA
Josh
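One option (a sketch, not a tested Wowza setup): RTMPS is just RTMP inside TLS, so nginx's top-level stream{} module can terminate TLS exactly like the HTTP block does for HLS and hand plain RTMP to Wowza. The port and certificate paths below are assumptions.

```nginx
# Goes at the top level of nginx.conf, alongside the http{} block.
# Requires nginx built with --with-stream and --with-stream_ssl_module.
stream {
    server {
        listen 1936 ssl;                         # external RTMPS port (assumed;
                                                 # cannot share 443 with the HTTP block)
        ssl_certificate     /etc/ssl/wowza.crt;  # assumed paths
        ssl_certificate_key /etc/ssl/wowza.key;

        proxy_pass 127.0.0.1:1935;               # plain RTMP on to Wowza
    }
}
```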

Non-caching aliases (no replies)

Here is its nginx config:

server {
server_name example.com m.example.com www.example.com www.m.example.com;
charset off;
disable_symlinks if_not_owner from=$root_path;
index index.html index.php;
root $root_path/$subdomain;
set $root_path /var/www/examplecom/data/www;
set $subdomain example.com;
ssi on;
access_log /var/www/httpd-logs/example.com.access.log ;
error_log /var/www/httpd-logs/example.com.error.log notice;
include /etc/nginx/vhosts-includes/*.conf;
include /etc/nginx/vhosts-resources/example.com/*.conf;
location / {
location ~* ^.+\.(jpg|jpeg|gif|png|svg|js|css|mp3|ogg|mpe?g|avi|zip|gz|bz2?|rar|swf|woff|ttf|otf|woff2|eot)$ {
try_files $uri $uri/ @fallback;
expires 6M;
}
location / {
try_files /does_not_exists @fallback;
}
location ~ [^/]\.ph(p\d*|tml)$ {
try_files /does_not_exists @fallback;
}
}
location @fallback {
error_log /dev/null crit;
proxy_pass http://127.0.0.1:8080;
proxy_redirect http://127.0.0.1:8080 /;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-Port $server_port;
access_log off ;
}
if ($host ~* ^((.*).example.com)$) {
set $subdomain $1;
}
gzip on;
gzip_comp_level 5;
gzip_disable "msie6";
gzip_types text/plain text/css image/jpeg image/png image/gif text/xml application/xml application/xhtml+xml text/javascript application/x-javascript application/javascript;
listen 8.1.35.82:80;
}


The problem is that files on the aliases m.example.com and www.m.example.com are not cached, but they are cached on example.com and www.example.com.

What can be the cause of the aliases m.example.com not caching?

Proxy Cache - How to Always Return Stale Content (no replies)

Hello,

My use case is simple:
1) Serve static files coming from an upstream (using proxy_pass)
2) If the upstream returns a non-200/304 response - ALWAYS serve the file from cache, even if it's expired/stale.

For the most part, 'proxy_cache_use_stale' does the trick.
However - it doesn't cover cases such as 401, 402, etc. (the full list it does support according to the docs: error | timeout | invalid_header | updating | http_500 | http_502 | http_503 | http_504 | http_403 | http_404 | http_429)

Is there a way I can achieve this? How can I force the cached file to be served for the cases not covered by the 'proxy_cache_use_stale' directive? (even if it requires using Lua)
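For reference, a sketch with the documented directive maxed out (the upstream address and cache zone are placeholders). Codes outside its list, such as 401 and 402, cannot trigger stale serving in stock nginx, so covering them really does require something like OpenResty/Lua intercepting the upstream status.

```nginx
proxy_cache_path /var/cache/nginx keys_zone=static_cache:10m inactive=30d;

server {
    listen 80;

    location / {
        proxy_cache static_cache;
        proxy_cache_valid 200 304 10m;
        # Every condition proxy_cache_use_stale currently supports:
        proxy_cache_use_stale error timeout invalid_header updating
                              http_500 http_502 http_503 http_504
                              http_403 http_404 http_429;
        proxy_pass http://127.0.0.1:8080;  # placeholder upstream
    }
}
```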

Configure nginx to forward port from 3000 to 80 (no replies)

http {
server {
listen 80;

location / {
proxy_pass http://127.0.0.1:3000;
proxy_http_version 1.1;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header Host $host;
proxy_set_header X-Real-Port $server_port;
proxy_set_header X-Real-Scheme $scheme;
}
}
}

I have a NodeJS/Express app listening on port 3000. How do I make the app available on `<ip-address>:80` (i.e., proxy port 80 to port 3000)?

The above configuration did not work (it shows `404 Not Found
nginx/1.10.3 (Ubuntu)` when I go to port 80).
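That 404 is served by nginx itself, which suggests another server block (e.g. the stock Ubuntu default site) is winning the match for port 80; the posted http{} block is also missing its closing brace. On Ubuntu the server block normally goes in /etc/nginx/sites-available/default, inside the http{} that nginx.conf already provides. A minimal sketch:

```nginx
# /etc/nginx/sites-available/default -- no surrounding http{} needed here,
# because nginx.conf already wraps included site files in one.
server {
    listen 80 default_server;  # beat any other port-80 server block
    server_name _;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```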

Nginx configuration for multiple servers in folders (no replies)

I have just installed Nginx on my Raspberry and everything works smoothly.

I have created a basic index.html file and when I connect to my raspi, the web page is displayed correctly. Also php works.

I would now like to create a couple of projects as subdirectories of the default /var/www directory: project automation (/var/www/automation) and project information (/var/www/information), each independent of the others.

So when connecting with the raspiIP (192.168.0.1), I would like to display the main or default website.

Then when connecting to raspiIP/automation, I would like to display the site dedicated to the automation and finally when connecting to rapiIP/information, I would like to display a third web site.

How can I configure nginx to achieve that?

I have tried what is explained about Server Blocks (Virtual Hosts), but I have gotten nowhere.

Can anyone please help me? Thanks, daniele
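A minimal sketch of one layout that matches the description (the PHP-FPM socket path is an assumption; check yours with `ls /run/php/`): with root /var/www, the URLs /automation and /information map straight onto those folders, and no extra server blocks are needed.

```nginx
server {
    listen 80 default_server;
    server_name _;

    root /var/www;               # /automation/... and /information/... map
    index index.html index.php;  # onto /var/www/automation and /var/www/information

    location / {
        try_files $uri $uri/ =404;
    }

    location ~ \.php$ {
        include snippets/fastcgi-php.conf;           # Raspbian/Debian snippet
        fastcgi_pass unix:/run/php/php7.0-fpm.sock;  # assumed PHP-FPM socket
    }
}
```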

Need a developer badly (no replies)

Hey there,
I am not a developer, or at least I have very little knowledge of development. But I am in real need: I want to mirror "https://google.com/flights" in my blog. Since Google blocks cross-origin framing via the X-Frame-Options header, I can't do it with an iframe.

I googled it and found "reverse proxy", by which we can present another website within our own. I knew one developer and he set up a reverse proxy on nginx, but it's not working properly: he installed a reverse proxy of google.com/flights on "booking.xyz.com". And I want to mirror booking.xyz.com in xyz.com, which is the actual blog, via iframe.

But when I insert the iframe of booking.xyz.com into xyz.com (although it is the same origin), it does not load. Moreover, booking.xyz.com is not working properly in the mobile version (the desktop version is fine): dynamic content does not load on click.

So I would like to know the proper way of installing a reverse proxy (or some other way) to present google.com/flights in my travel blog. I am ready to pay a developer who can address the issue, install it successfully, and make it work properly.

Looking forward to your answers.

Nginx - Only handles exactly 500 request per second - How to increase the limit? (no replies)

worker_processes auto;
pid /run/nginx.pid;
worker_rlimit_nofile 100000;
error_log /var/log/nginx/error.log crit;
events {
worker_connections 4000;
multi_accept on;
use epoll;
}

http {
include /etc/nginx/mime.types;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
directio 4m;
types_hash_max_size 2048;

client_body_buffer_size 15K;
client_max_body_size 8m;

keepalive_timeout 20;
client_body_timeout 15;
client_header_timeout 15;
send_timeout 10;

open_file_cache max=5000 inactive=20s;
open_file_cache_valid 60s;
open_file_cache_min_uses 5;
open_file_cache_errors off;

gzip on;
gzip_comp_level 2;
gzip_min_length 1000;
gzip_proxied any;
gzip_types text/plain text/css application/json application/xjavascript text/xml application/xml application/xml+rss text/javascript;

access_log off;
log_not_found off;
include /etc/nginx/conf.d/*.conf;
}

The server has 8 cores and 32 GB of RAM.
The load average is 0.05.
But nginx is not able to handle more than 500 requests per second.
Please tell me how to increase this limit.
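A hard ceiling at almost exactly 500 r/s on an idle machine usually points to a queue limit outside the posted config rather than to worker_connections. Two common suspects, sketched with assumed values: the listen socket's accept backlog (nginx defaults to 511 on Linux, suspiciously close to 500) and the matching kernel limits.

```nginx
server {
    # Raise the accept queue; the kernel caps it at net.core.somaxconn,
    # so those sysctls must be raised as well, e.g.:
    #   sysctl -w net.core.somaxconn=16384
    #   sysctl -w net.ipv4.tcp_max_syn_backlog=16384
    listen 80 backlog=16384 reuseport;
    # ... rest of the server block ...
}
```

If the ceiling persists, check whether the load generator or an upstream rate limit is the real bottleneck before tuning nginx further.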

Client certificate validation error handling (no replies)

We are using nginx as a reverse proxy to enable a client certificate authentication for our REST API endpoints. The config is as follows:

server {
listen 443 ssl;
ssl_certificate /Users/asedov/Documents/work/ssl/openssl-scripts/ca/certs/test-backend_crt.pem;
ssl_certificate_key /Users/asedov/Documents/work/ssl/openssl-scripts/ca/private/test-backend_key.pem;
ssl_client_certificate /Users/asedov/Documents/work/ssl/openssl-scripts/ca/certs/ca_crt.pem;

ssl_verify_client optional;
ssl_verify_depth 2;

server_name localhost;

proxy_set_header SSL_CLIENT_CERT $ssl_client_cert;

location / {
proxy_pass http://127.0.0.1:8088;
}
}

The idea is to get the certificate body in the SSL_CLIENT_CERT header if a client provides a certificate. It works fine as long as the provided certificate is valid. Otherwise, for example if the certificate is expired, nginx responds with a 400 error and doesn't proxy_pass to our backend.

I'm looking for a way to change this behavior and handle the certificate verification error so that nginx still proxies to our API, but with an empty SSL_CLIENT_CERT header. So, basically, we need nginx to verify certificates (if provided) and set the header only when the certificate is both provided and valid.

Is it possible?

Thank you in advance!
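One way this is commonly solved (a sketch adapted to the posted config): switch to optional_no_ca so a failed verification no longer aborts the handshake with 400, then use a map on $ssl_client_verify so the header is only populated when verification actually succeeded.

```nginx
# Forward the certificate only when nginx verified it; otherwise send "".
map $ssl_client_verify $verified_client_cert {
    SUCCESS $ssl_client_cert;
    default "";
}

server {
    listen 443 ssl;
    server_name localhost;
    # ssl_certificate / ssl_certificate_key / ssl_client_certificate as posted

    ssl_verify_client optional_no_ca;  # request a cert, but don't 400 on failure
    ssl_verify_depth  2;

    location / {
        proxy_set_header SSL_CLIENT_CERT   $verified_client_cert;
        proxy_set_header SSL_CLIENT_VERIFY $ssl_client_verify;  # e.g. for auditing
        proxy_pass http://127.0.0.1:8088;
    }
}
```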

Reverse Proxy as a WAF? (1 reply)

1. Can someone give me some guidelines about configuring a WAF? I want to filter the HTTP traffic for a few sites, but I would like to have a separate server (Proxy) for WAF.

I think I just need an nginx reverse proxy with Naxsi or ModSecurity. As far as I know, Cloudflare uses one too. Why not use my own WAF instead of Cloudflare?

2. How many sites is it okay to put behind a single WAF proxy server?

redirect to another port (no replies)

Hello,
I set up a shiny server and a nginx server, I would like that when we connect to the nginx server it redirects us to the shiny server.
To redirect to the shiny server I use the proxy-pass command but I get the error page : nginx error! The page you are looking for is temporarily unavailable. Please try again later.

Server shiny and nginx are on the same machine.
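That error page means nginx accepted the request but could not reach the upstream (wrong port, Shiny not listening, or SELinux blocking outbound connections from nginx). A minimal sketch, assuming Shiny Server on its default port 3838 on the same machine; the websocket headers matter because Shiny apps use SockJS:

```nginx
server {
    listen 80;

    location / {
        proxy_pass http://127.0.0.1:3838;  # Shiny Server's default port (assumed)
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_read_timeout 600s;           # long-lived app sessions
    }
}
```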

Can NGINX do content based redirection? (no replies)

I'd like to be able to use NGINX as the single point of entry and send the traffic off to different servers depending on the content of a SOAP/XML element. Is that possible?

So a SOAP request might POST <colour> BLUE </colour> and it would reverse proxy to server 1, but if it was <colour> RED </colour> it would send it to server 2. I know you can do this with GET parameters ($arg_*), but it has to work for POST.

Is that possible? Any links for me?

Thanks!
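Stock nginx can't choose an upstream from a POST body, but with the Lua module (OpenResty) or njs it can. A sketch of the Lua variant, with made-up upstream addresses: the body is read in the access phase and a variable steers proxy_pass.

```nginx
# Requires nginx with ngx_http_lua_module (e.g. OpenResty).
upstream blue_backend { server 10.0.0.1:8080; }  # made-up addresses
upstream red_backend  { server 10.0.0.2:8080; }

server {
    listen 80;

    location /soap {
        set $target blue_backend;          # default route
        access_by_lua_block {
            ngx.req.read_body()
            local body = ngx.req.get_body_data() or ""
            if body:find("<colour>%s*RED%s*</colour>") then
                ngx.var.target = "red_backend"
            end
        }
        proxy_pass http://$target;         # resolved against the upstreams above
    }
}
```

Note that very large bodies may be buffered to a temp file, in which case get_body_data() returns nil and ngx.req.get_body_file() has to be consulted instead.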

Taking much time to load (no replies)

I don't know why, but nginx is taking a very long time to load pages. I am just running a single WordPress website, and it takes 2 minutes to load a single page.
With my previous setup it never took more than 3 seconds to load a single page.

Any solution to this?

I read on blogs and in posts that nginx is much faster than Apache, so why is it taking so much time just to load a single page? :(


Some details:
worker_connections 768;
server_names_hash_bucket_size 64;

I have an average of 100 visitors per day.

