Channel: Nginx Forum - How to...
Viewing all 2931 articles

How to set up routing via root (no replies)

Sorry for the dumb question, but is it possible to get the part of the URL right after the domain name via the `server_name` variable?


server {
    listen 80;
    index index.php index.html;
    server_name ~^localhost/(?<project>)/.+$;
    root /var/www/$project/public;
    ...
}

The idea is to change the root path according to each project's folder structure:

1) "/var/www/project-one/public/index.php"
2) "/var/www/project-two/public/index.php"

So I would be able to reach entry point of each project by requests:

"http://localhost/project-one/" ->> "/var/www/project-one/public/"

"http://localhost/project-two/" ->> "/var/www/project-two/public/"

How can I implement such behavior without using `alias`?
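For reference, `server_name` only ever matches the Host header, never the URI path, so a regex like the one above can never capture a path segment. One possible direction (an untested sketch, not a confirmed answer): capture the first path segment in a `location` regex instead, and strip it before the filesystem lookup. Note that a bare `/project-one` without the trailing slash is not handled by the rewrite below.

```nginx
server {
    listen 80;
    server_name localhost;
    index index.php index.html;

    # Capture the first path segment as $project.
    location ~ ^/(?<project>[^/]+)(/|$) {
        root /var/www/$project/public;
        # "root" appends the full URI, so strip the leading
        # /project-name/ segment before the lookup:
        rewrite ^/[^/]+/(.*)$ /$1 break;
    }
}
```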

I can't use TLS 1.3 only on my site, please help (no replies)

Hello all, professional brothers & sisters,

I use a Raspberry Pi 3B+ as my web server, running Ubuntu Server 19.10 eoan (32-bit armhf).

My nginx is this PPA version (1.17.8):
https://launchpad.net/~ondrej/+archive/ubuntu/nginx-mainline

My OpenSSL is version 1.1.1c.

I want my site to run only on TLSv1.3, so in my config file I set:

ssl_protocols TLSv1.3;

ssl_ciphers TLS-CHACHA20-POLY1305-SHA256:TLS-AES-256-GCM-SHA384;

but when I test the config file with `nginx -t`, it reports an error:
nginx: [emerg] SSL_CTX_set_cipher_list("TLS-CHACHA20-POLY1305-SHA256:TLS-AES-256-GCM-SHA384") failed (SSL: error:1410D0B9:SSL routines:SSL_CTX_set_cipher_list:no cipher match)

If I use TLSv1.2 and TLSv1.3:

ssl_protocols TLSv1.2 TLSv1.3;

ssl_ciphers TLS-CHACHA20-POLY1305-SHA256:TLS-AES-256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE+AES128:RSA+AES128:ECDHE+AES256:RSA+AES256:ECDHE+3DES:RSA+3DES;

and then run `nginx -t` again, it reports no error:

nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

===========================================================

Here is what I did to test.

A.) When I check the two TLS 1.3 ciphers with these commands, they show errors:

openssl ciphers -v TLS-AES-256-GCM-SHA384
Error in cipher list
1992302608:error:1410D0B9:SSL routines:SSL_CTX_set_cipher_list:no cipher match:../ssl/ssl_lib.c:2549:

openssl ciphers -v TLS-CHACHA20-POLY1305-SHA256
Error in cipher list
1992876048:error:1410D0B9:SSL routines:SSL_CTX_set_cipher_list:no cipher match:../ssl/ssl_lib.c:2549:

B.) If I try another cipher, no error is shown:

openssl ciphers -v ECDHE-ECDSA-CHACHA20-POLY1305
TLS_AES_256_GCM_SHA384 TLSv1.3 Kx=any Au=any Enc=AESGCM(256) Mac=AEAD
TLS_CHACHA20_POLY1305_SHA256 TLSv1.3 Kx=any Au=any Enc=CHACHA20/POLY1305(256) Mac=AEAD
TLS_AES_128_GCM_SHA256 TLSv1.3 Kx=any Au=any Enc=AESGCM(128) Mac=AEAD
ECDHE-ECDSA-CHACHA20-POLY1305 TLSv1.2 Kx=ECDH Au=ECDSA Enc=CHACHA20/POLY1305(256) Mac=AEAD

C.) With this command, openssl shows that it supports TLS 1.3:

root@ubuntu:/etc/nginx/sites-available# openssl ciphers -v
TLS_AES_256_GCM_SHA384 TLSv1.3 Kx=any Au=any Enc=AESGCM(256) Mac=AEAD
TLS_CHACHA20_POLY1305_SHA256 TLSv1.3 Kx=any Au=any Enc=CHACHA20/POLY1305(256) Mac=AEAD
TLS_AES_128_GCM_SHA256 TLSv1.3 Kx=any Au=any Enc=AESGCM(128) Mac=AEAD

D.) I removed nginx & OpenSSL and reinstalled them again, many times; it doesn't work.

What's wrong with my nginx & OpenSSL? Please help.
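For what it's worth, OpenSSL 1.1.1 names the TLS 1.3 suites with underscores (e.g. TLS_AES_256_GCM_SHA384, exactly as the `openssl ciphers -v` output above shows) and configures them through a separate API (SSL_CTX_set_ciphersuites), so they can never match in `ssl_ciphers`, which feeds SSL_CTX_set_cipher_list. A hedged sketch of the usual fix:

```nginx
# The TLS 1.3 suites are enabled by default in OpenSSL 1.1.1, so this
# alone is normally enough; drop the ssl_ciphers line entirely:
ssl_protocols TLSv1.3;

# On nginx 1.19.4+ the TLS 1.3 suite list can be set explicitly via
# the new API (not available on 1.17.8):
# ssl_conf_command Ciphersuites TLS_CHACHA20_POLY1305_SHA256:TLS_AES_256_GCM_SHA384;
```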

Thank you in advance.

beginner having trouble with A record (1 reply)

I have deployed a Flask application on an EC2 instance here: http://52.49.186.217/
I purchased the domain cats-vs-dogs.fun on name.com and created an A record, as you can see in the picture.
I also restarted the nginx server (just in case).
The static IP works fine, but I cannot access the application at www.cats-vs-dogs.fun; I get the nginx welcome page instead :(

What am I missing? It is my first time using nginx.
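One common cause (a guess, since the config isn't shown): when no server block lists the requested hostname in `server_name`, nginx serves the request from the default server, which is the stock welcome page. A minimal sketch, assuming the Flask app listens locally on port 8000 (a placeholder):

```nginx
server {
    listen 80;
    # Must cover both the bare domain and the www host, and the DNS
    # also needs an A (or CNAME) record for the www name itself:
    server_name cats-vs-dogs.fun www.cats-vs-dogs.fun;

    location / {
        proxy_pass http://127.0.0.1:8000;  # hypothetical app port
        proxy_set_header Host $host;
    }
}
```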

Set Cookie on Nginx (no replies)

Hi,

How and where do I set up the Set-Cookie header on nginx, as in this link: https://geekflare.com/httponly-secure-cookie-nginx/
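The article linked above relies on `proxy_cookie_path`, which rewrites Set-Cookie headers coming back from an upstream; it goes in the server or location block that does the proxying. A minimal sketch (the upstream name is hypothetical):

```nginx
location / {
    proxy_pass http://backend;   # placeholder upstream
    # Append HttpOnly and Secure to every cookie the upstream sets
    # on the root path:
    proxy_cookie_path / "/; HTTPOnly; Secure";
}
```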

Please assist me or guide me in the right direction.

Much appreciated.

nginx caching few upstreams on same server (no replies)

I'm trying to build an nginx server to cache a few upstream servers. My nginx conf is like this:
=====================================================
...
http {

    upstream srv1 {
        ip_hash;
        server srv1.domain1.fr:443;
    }

    upstream srv2 {
        ip_hash;
        server srv2.domain2.fr:443;
    }
    ...
    proxy_redirect off;
    proxy_http_version 1.1;
    proxy_read_timeout 10s;
    proxy_send_timeout 10s;
    proxy_connect_timeout 10s;
    proxy_cache_path /nginx/cache/cache_temp use_temp_path=off keys_zone=cache_temp:10m max_size=10g inactive=10m;
    proxy_cache cache_temp;
    proxy_cache_methods GET HEAD;
    proxy_cache_key $uri;
    proxy_cache_valid 404 3s;
    proxy_cache_lock on;
    proxy_cache_lock_age 5s;
    proxy_cache_lock_timeout 1h;
    proxy_ignore_headers Cache-Control;
    proxy_ignore_headers Set-Cookie;
    proxy_cache_use_stale updating;
    ...
    #srv1
    server {
        listen 443 ssl http2;
        server_name srv1.domain1.fr;

        all ssl settings...

        location ~* \.(gif|jpg|jpeg|png|wmv|avi|mpg|mpeg|mp4|htm|html|js|css|mp3|swf|ico|flv|woff|woff2|ttf|svg)$ {
            proxy_cache_valid 12h;
            proxy_ignore_headers X-Accel-Expires Expires Cache-Control;
            add_header X-Cache $upstream_cache_status;
            proxy_pass https://srv1;
        }

        location / {
            proxy_cache_valid 12h;
            proxy_ignore_headers X-Accel-Expires Expires Cache-Control;
            add_header X-Cache $upstream_cache_status;
            proxy_pass https://srv1;
        }
    }

    #srv2
    server {
        listen 443 ssl http2;
        server_name srv2.domain2.fr;

        all ssl settings...

        location ~* \.(gif|jpg|jpeg|png|wmv|avi|mpg|mpeg|mp4|htm|html|js|css|mp3|swf|ico|flv|woff|woff2|ttf|svg)$ {
            proxy_cache_valid 12h;
            proxy_ignore_headers X-Accel-Expires Expires Cache-Control;
            add_header X-Cache $upstream_cache_status;
            proxy_pass https://srv2;
        }

        location / {
            proxy_cache_valid 12h;
            proxy_ignore_headers X-Accel-Expires Expires Cache-Control;
            add_header X-Cache $upstream_cache_status;
            proxy_pass https://srv2;
        }
    }
====================================================

In my DNS I point srv1.domain1.fr and srv2.domain2.fr at the same IP, and that works well. But when I switch between the two, an issue occurs: the cache is shared between them, so I'm trying to find a way to get separate caches.
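A likely culprit in the config above: `proxy_cache_key $uri;` contains no hostname, so /logo.png fetched from srv1 and from srv2 collide on the same cache entry. Two hedged options:

```nginx
# Option 1: include the host in the key so each vhost's entries
# stay separate (this mirrors the default key's structure):
proxy_cache_key $scheme$host$request_uri;

# Option 2: one cache zone per vhost, with the matching proxy_cache
# directive moved inside each server block:
proxy_cache_path /nginx/cache/srv1 keys_zone=cache_srv1:10m max_size=5g inactive=10m use_temp_path=off;
proxy_cache_path /nginx/cache/srv2 keys_zone=cache_srv2:10m max_size=5g inactive=10m use_temp_path=off;
```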

Any ideas? Thanks.

nginx reverse proxy certificate (no replies)

Hello

this is my topology:
client (my laptop) --https--> reverse proxy (vm1) --https--> upstream (vm2, 10.X.X.X)
client: public IP 1.2.3.4
reverse proxy: public IP 5.6.7.8, nginx, trusted certificates signed by GlobalSign
upstream: private IP 10.X.X.X, IIS 8, certificate signed by my own CA

I can access the upstream web application from my laptop, but the browser shows an untrusted certificate.
Is it possible for the proxy to introduce itself in place of the upstream, so that the client validates the proxy's certificate instead of the upstream's?

With regards
Boris

--without-http_charset_module - ngx_http_charset_module (no replies)

Hi all;

I am running nginx installed from an Ubuntu OS package, and the challenge now is that ngx_http_charset_module is required. When I run `nginx -V` I do not see the module listed, meaning it's not enabled/installed.

1. How do I add/enable ngx_http_charset_module on a package-installed nginx on Ubuntu?

2. Some write-ups show that the module can be excluded with --without-http_charset_module in a source-based installation. If so, how can I enable it under such an installation?

I would be glad to get assistance with both scenarios, so I can use either installation choice.
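For what it's worth, ngx_http_charset_module is compiled in by default, so it is only missing if the build was explicitly configured with --without-http_charset_module (the configure arguments are listed by `nginx -V`, not `nginx -t`). A sketch of how to check and, if needed, rebuild:

```sh
# 1) Check the configure arguments of the installed binary:
nginx -V 2>&1 | grep -o -- '--without-http_charset_module' \
    || echo "charset module not disabled (it is built in by default)"

# 2) For a source build, simply leave that flag out; there is no
#    --with-http_charset_module because the module is on by default:
./configure <your other flags, minus --without-http_charset_module>
make && make install
```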

Regards;

AMC

Performance Problem with Proxy Buffering Enabled (no replies)

Hello,

I hope everyone is doing well and staying healthy!!

I recently found this forum while investigating a performance issue I'm having when proxy buffering is enabled. I've been looking and trying different solutions but none of them are working for me, so I hope your wisdom can help me to understand and figure out what the issue is.

Here is a description of my environment and the problem that I'm having. I have two servers working as a reverse proxy, one server only responds to requests on port 80, while the other one serves SSL certificates on port 443.

Server A:
Listens to port 80
nginx version: openresty/1.13.6.2
Rhel 6.10
Physical Memory: 64GB
CPUs: 55 cores @ 2.00GHz

Server B:
Listens to port 443
nginx version: openresty/1.13.6.2
Rhel 6.10
Physical Memory: 128GB
CPUs: 55 cores @ 2.00GHz

I enabled caching and noticed some issues with the buffers, so I tuned the parameters on the SSL server, starting with this configuration:

proxy_buffering on;
proxy_temp_path /proxy_buffer/tmp 1 2;
proxy_buffer_size 64k;
proxy_buffers 64 4k;
proxy_busy_buffers_size 84k;

With this setup the server runs a little better: CPU at 0%, memory at 4%, and load around 9%. I noticed the load is high because some requests still don't fit into the buffers and are written to disk (I/O wait is around 0.8%). In fact, after one week with this configuration, the proxy_temp_path directory has grown to 30 GB of disk usage. If I disable buffering, the load goes down to 1%.

I also checked the nginx stats: this SSL server has around 2,500 active connections and is getting around 15,000 requests per minute.

The story on server A is totally different. It is busier than the SSL one; it currently has buffering disabled and is getting around 29K requests per minute with 6,500 active connections. CPU is at 2%, memory at 5%, and load average is 0.8% with no I/O wait.

If I enable buffering on this host with the same aforementioned configuration, the server begins to have performance problems; in less than two hours memory utilization reaches 100% and load average spikes to 50-60%.

I've been trying different setups, such as fewer but bigger buffers or more, smaller ones, and I have not been able to find a proper configuration that makes this work.

I'd really appreciate any advice or recommendations you guys can give me. I'd like to know what buffer sizes you would recommend based on the stats and hardware specs mentioned above.
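One hedged direction to try (an experiment, not a definitive recommendation): size the in-memory buffers closer to the typical response size and cap or disable the temp-file spill, since the disk writes appear to drive the load. Whether disabling the spill is acceptable depends on how slow your clients are, because nginx then reads from the upstream no faster than it can send downstream.

```nginx
proxy_buffering on;
proxy_buffer_size 16k;       # headers + start of the response
proxy_buffers 64 16k;        # ~1 MB in memory per connection
proxy_busy_buffers_size 32k;
# Stop spooling oversized responses to /proxy_buffer/tmp; once the
# buffers fill, nginx sends synchronously instead of writing to disk:
proxy_max_temp_file_size 0;
```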

Thanks,

health check URI type rest (json) (no replies)

Good afternoon.
I currently have a server configured for balancing via location and upstream blocks.
In the upstream config I have a file declared for each backend group, and in the location config my referenced operation groups.
It works perfectly, but what I can't manage is a health check at the URI level, because the services I'm balancing are REST services. The docs only allow (or I have only found how) to do a GET, whereas I want to do a POST with my data in the body.


Another thing I can't get working is the add_header directive, so that the server that handled the request is reported in a response header.
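On the add_header point: a response header can carry the address of the backend that served the request via the $upstream_addr variable. Note that add_header directives inside a location replace, rather than extend, any inherited from outer levels, which is a common reason they seem not to work. A minimal sketch (the upstream name is a placeholder):

```nginx
location / {
    proxy_pass http://backend;   # hypothetical upstream group
    # Report which backend handled the request, even on error codes:
    add_header X-Upstream $upstream_addr always;
}
```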

Best regards

Reverse proxy + Blynk (no replies)

Hello,
I use an nginx reverse proxy, and I need to create a rule to allow communication with the Blynk IoT server. The website works smoothly, but the Android applications do not. I read somewhere that the app does not use the HTTP protocol to communicate with the server, and that the problem could be there. This is my very simple proxy rule:
"
server {
listen 443;
listen [::]: 443;

server_name blynk. *;

location / {
proxy_pass https://192.168.1.10:9443/;
}
}
"
Would you please tell me how to set this up so that the application works?
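If the Android app talks over WebSockets (plausible for Blynk, though unverified here), the proxy has to forward the protocol upgrade explicitly; a plain proxy_pass drops the Upgrade header. A hedged sketch extending the rule above:

```nginx
server {
    listen 443;
    listen [::]:443;

    server_name blynk.*;

    location / {
        proxy_pass https://192.168.1.10:9443/;
        # Required for the WebSocket upgrade handshake:
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}
```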

Nginx losing about 50% of requests (no replies)

Hi all.
We have server A, which redirects requests to server B.
Server B has a primitive nginx configuration like:

server {
    #proxy_connect_timeout 600;
    #proxy_send_timeout 600;
    #proxy_read_timeout 600;
    send_timeout 600;
    #gzip on;
    #gzip_disable "msie6";
    #gzip_types text/plain application/json;

    listen 80;
    listen [::]:80;
    listen 443 ssl http2;
    listen [::]:443 ssl http2;

    ssl on;
    ssl_certificate /etc/nginx/ssl/somessl_com.crt;
    ssl_certificate_key /etc/nginx/ssl/somessl_com.key;

    server_name serverB.com;

    access_log /var/log/nginx/serverB.access.log;
    error_log /var/log/nginx/serverB.error.log warn;

    location = /favicon.ico {
        access_log off;
        log_not_found off;
    }

    location ^~ /here-it-redirects/ {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://127.0.0.1:3301/;
        proxy_redirect off;
    }

    location /.well-known/pki-validation/905AA44F6FD269FBCF9811238D715250.txt {
        alias /etc/nginx/ssl/905AA44F6FD269FBCF9811238D715250.txt;
    }
}

We have statistics on server A showing roughly twice as many redirections as we see incoming requests on server B.
What could the issue be?
Thank you

NGINX proxy to Ingress Controller with Client Certificate Authentication (2 replies)

In my production environment I have a configuration with two nginx servers, and the communication between the two is secured with a config like this: https://docs.nginx.com/nginx/admin-guide/security-controls/securing-http-traffic-upstream/

INTERNET ---> NGINX reverse proxy ---TLS authentication---> NGINX upstream ---> Application

The conf works as expected; the upstream accepts requests only with the trusted certificate.

But I need to migrate the upstream server from a bare-metal server to a Kubernetes cluster on Azure Kubernetes Service. The conf on the server acting as reverse proxy is unchanged, and I migrated the upstream config to the NGINX Ingress Controller.

I've deployed this image of the Ingress Controller: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.30.0

And configured the resource as follow:

---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-backend
  namespace: dev
  annotations:
    nginx.ingress.kubernetes.io/auth-tls-verify-client: "on"
    nginx.ingress.kubernetes.io/auth-tls-secret: "dev/my-cert"
    nginx.ingress.kubernetes.io/auth-tls-error-page: "https://google.com"
spec:
  tls:
    - hosts:
        - my-backend.my-domain
      secretName: my-cert
  rules:
    - host: my-backend.my-domain
      http:
        paths:
          - path: /
            backend:
              serviceName: my-backend-service
              servicePort: http

And the secret called "my-cert" includes ca.crt, tls.crt, and tls.key imported from the NGINX upstream. https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md#client-certificate-authentication

Config in the NGINX reverse proxy unchanged:

location / {
    set $upstream my-upstream;
    proxy_pass https://$upstream$request_uri;
    proxy_set_header Host my-backend.my-domain;
    proxy_set_header X-Request-ID $request_id;
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_ssl_certificate /etc/nginx/ssl/client.pem;
    proxy_ssl_certificate_key /etc/nginx/ssl/client.key;
    proxy_ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    proxy_ssl_ciphers HIGH:!aNULL:!MD5;
    proxy_ssl_trusted_certificate /etc/nginx/CA/CA.pem;
    proxy_ssl_session_reuse on;
}

The first attempt to try the config was a curl request from the reverse proxy to the Ingress Controller passing the client certificate:

curl --cacert /etc/nginx/CA/CA.pem --key /etc/nginx/ssl/client.key --cert /etc/nginx/ssl/client.pem https://my-backend.my-domain/health
{"status":"UP"}

It works!

But when I send a request through the NGINX reverse proxy, I'm redirected to the google.com page, as configured in the Ingress Controller. This is not the expected behavior: the NGINX reverse proxy is not able to authenticate to the nginx ingress controller.

Can someone help me fix the config and make the auth work?
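One difference between the working curl and the failing proxy_pass (hedged, but a common pitfall): curl sends SNI for my-backend.my-domain, while nginx does not send SNI on upstream TLS connections by default. The ingress controller picks its TLS server block by SNI, so without it the request may land on the controller's default server, where the client-certificate auth is not configured. Worth trying in the proxy's location block:

```nginx
# Send SNI so the ingress selects the my-backend.my-domain server:
proxy_ssl_server_name on;
proxy_ssl_name my-backend.my-domain;
```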

Logging to AWS Cloudwatch from Nginx Container (no replies)

According to the documentation for the nginx container, "By default, the NGINX image is configured to send NGINX access log and error log to the Docker log collector". When we configure the logs to show up in AWS CloudWatch, it looks like the error log and the access log are combined in the same log group. Has anyone found a way to make each go to a different log group?
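One workaround (an assumption about the setup: the logs reach CloudWatch via the container's stdout/stderr, which the awslogs driver merges into one stream): override the image's symlinks so each log goes to its own file, then ship each file to its own log group with the CloudWatch agent or a logging sidecar.

```nginx
# The stock image symlinks these to /dev/stdout and /dev/stderr;
# pointing them at real files lets a collector treat them separately
# ("main" is the log_format defined in the image's nginx.conf):
access_log /var/log/nginx/access.log main;
error_log  /var/log/nginx/error.log warn;
```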

Limiting connections based on subnet (1 reply)

Hi

I am new to NGINX and want to know if this is even feasible.

We have a server running a warehouse application where users can connect to a webpage on port 9060 or connect with an RF gun on port 9380

This warehouse application has multiple warehouses setup within it but each warehouse within the application will only be used by people from a certain subnet.

Is it possible to limit the number of connections to the server from a specific subnet, regardless of whether they come in via telnet or to the webpage on port 9060?

We purchase licenses for each warehouse and need to limit the total number of connections to the number of licenses, which is why I ask.
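This looks feasible with limit_conn keyed on a per-subnet value rather than the usual $binary_remote_addr. A sketch under assumptions (the subnets and the limit of 50 are made up; this covers the HTTP side on port 9060, while raw telnet/RF traffic on 9380 would need the analogous limit_conn directives from the stream module):

```nginx
# Map each client to its warehouse by source subnet; clients that
# match no subnet get an empty key and are not limited:
geo $warehouse {
    default         "";
    10.1.0.0/16     wh1;
    10.2.0.0/16     wh2;
}

limit_conn_zone $warehouse zone=per_warehouse:1m;

server {
    listen 9060;
    # e.g. 50 licensed connections per warehouse:
    limit_conn per_warehouse 50;
    # proxy_pass / root etc. for the warehouse webpage here
}
```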

Logging client IP on invalid request (no replies)

I'm running nginx behind a proxy and use the real IP module along with the X-Forwarded-For header to log the real client address in access.log. This works fine for normal requests, but fails with an invalid one.

I enabled connection debugging to find this:

2020/05/06 10:21:58 [info] 1181#1181: *53597678 client sent invalid request while reading client request line, client: <proxy address>, server: _, request: "GET <a broken request> HTTP/1.1"

I couldn't find any headers in the debug log for that request, so presumably the logging happens before the headers are parsed in the case of an invalid request?

The invalid requests are not the problem. They come from a tester that sends all manner of things. I'd like to have the real client address in the log in these cases too. Is there anything that can be done?

What directory & file permissions do you recommend? (no replies)

What directory and file permissions do you recommend for a basic website? At some point, I will be adding additional websites as subdirectories of the /etc/nginx/html path.

I'm installing nginx 1.18.0 from source on a CentOS server.

My configure looks like:

./configure --prefix=/etc/nginx --modules-path=/etc/nginx/modules --user=nginx --group=nginx --with-http_ssl_module --with-http_stub_status_module --with-pcre --with-http_gunzip_module --with-http_gzip_static_module

After compilation, I run this command:
useradd --system --home /var/cache/nginx --shell /sbin/nologin --comment "nginx user" --user-group nginx

Note: I'm not sure the command is correct, as I don't have a /var/cache/nginx directory (?), and I'm not sure the rest of the command syntax is right.
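A common baseline is root-owned content with 755 directories and 644 files (readable by the worker user, writable only by the admin), with the nginx user owning only runtime paths; `useradd` does not create /var/cache/nginx for you, so add `mkdir -p /var/cache/nginx && chown nginx:nginx /var/cache/nginx`. A demonstration of the 755/644 scheme on a scratch directory (apply the same chmods to /etc/nginx/html):

```shell
# Demonstrate the 755/644 scheme on a throwaway copy:
mkdir -p /tmp/site-demo/sub
touch /tmp/site-demo/index.html
find /tmp/site-demo -type d -exec chmod 755 {} +
find /tmp/site-demo -type f -exec chmod 644 {} +
stat -c '%a %n' /tmp/site-demo /tmp/site-demo/sub /tmp/site-demo/index.html
```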

Thank you,
Ed

error dynamic module nginx 1.10.3 ngx_http_headers_more_filter_module (no replies)

Hi all :)

I'm adding the dynamic module ngx_http_headers_more_filter_module to nginx 1.10.3.

#nginx -V

nginx version: nginx/1.10.3
built by gcc 4.4.7 20120313 (Red Hat 4.4.7-23) (GCC)
built with OpenSSL 1.0.1e-fips 11 Feb 2013
TLS SNI support enabled
configure arguments: --prefix=/usr/share/nginx --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib64/nginx/modules --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --http-client-body-temp-path=/var/lib/nginx/tmp/client_body --http-proxy-temp-path=/var/lib/nginx/tmp/proxy --http-fastcgi-temp-path=/var/lib/nginx/tmp/fastcgi --http-uwsgi-temp-path=/var/lib/nginx/tmp/uwsgi --http-scgi-temp-path=/var/lib/nginx/tmp/scgi --pid-path=/var/run/nginx.pid --lock-path=/var/lock/subsys/nginx --user=nginx --group=nginx --with-file-aio --with-ipv6 --with-http_ssl_module --with-http_v2_module --with-http_realip_module --with-http_addition_module --with-http_xslt_module=dynamic --with-http_image_filter_module=dynamic --with-http_geoip_module=dynamic --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_degradation_module --with-http_slice_module --with-http_stub_status_module --with-http_perl_module=dynamic --with-mail=dynamic --with-mail_ssl_module --with-pcre --with-pcre-jit --with-stream=dynamic --with-stream_ssl_module --with-debug --with-cc-opt='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic' --with-ld-opt=' -Wl,-E'


#wget 'http://nginx.org/download/nginx-1.10.3.tar.gz'

#git clone https://github.com/openresty/headers-more-nginx-module.git

The module's `config` file:

ngx_addon_name=ngx_http_headers_more_filter_module

HEADERS_MORE_SRCS=" \
$ngx_addon_dir/src/ngx_http_headers_more_filter_module.c \
$ngx_addon_dir/src/ngx_http_headers_more_headers_out.c \
$ngx_addon_dir/src/ngx_http_headers_more_headers_in.c \
$ngx_addon_dir/src/ngx_http_headers_more_util.c \
"

HEADERS_MORE_DEPS=" \
$ngx_addon_dir/src/ddebug.h \
$ngx_addon_dir/src/ngx_http_headers_more_filter_module.h \
$ngx_addon_dir/src/ngx_http_headers_more_headers_in.h \
$ngx_addon_dir/src/ngx_http_headers_more_headers_out.h \
$ngx_addon_dir/src/ngx_http_headers_more_headers_in.h \
$ngx_addon_dir/src/ngx_http_headers_more_util.h \
"

if test -n "$ngx_module_link"; then
ngx_module_type=HTTP_AUX_FILTER
ngx_module_name=$ngx_addon_name
ngx_module_incs=
ngx_module_deps="$HEADERS_MORE_DEPS"
ngx_module_srcs="$HEADERS_MORE_SRCS"
ngx_module_libs=

. auto/module
else
HTTP_AUX_FILTER_MODULES="$HTTP_AUX_FILTER_MODULES $ngx_addon_name"
NGX_ADDON_SRCS="$NGX_ADDON_SRCS $HEADERS_MORE_SRCS"
NGX_ADDON_DEPS="$NGX_ADDON_DEPS $HEADERS_MORE_DEPS"
fi


cd /tmp/nginx-1.10.3/
./configure --prefix=/tmp/nginx-1.10.3 \
--add-dynamic-module=/tmp/headers-more-nginx-module
make modules

Output of configure:

Configuration summary
+ using system PCRE library
+ OpenSSL library is not used
+ using builtin md5 code
+ sha1 library is not found
+ using system zlib library

nginx path prefix: "/tmp/nginx-1.10.3"
nginx binary file: "/tmp/nginx-1.10.3/sbin/nginx"
nginx modules path: "/tmp/nginx-1.10.3/modules"
nginx configuration prefix: "/tmp/nginx-1.10.3/conf"
nginx configuration file: "/tmp/nginx-1.10.3/conf/nginx.conf"
nginx pid file: "/tmp/nginx-1.10.3/logs/nginx.pid"
nginx error log file: "/tmp/nginx-1.10.3/logs/error.log"
nginx http access log file: "/tmp/nginx-1.10.3/logs/access.log"
nginx http client request body temporary files: "client_body_temp"
nginx http proxy temporary files: "proxy_temp"
nginx http fastcgi temporary files: "fastcgi_temp"
nginx http uwsgi temporary files: "uwsgi_temp"
nginx http scgi temporary files: "scgi_temp"

#make modules

make -f objs/Makefile modules
make[1]: Entering directory `/tmp/nginx-1.10.3'
cc -c -fPIC -pipe -O -W -Wall -Wpointer-arith -Wno-unused-parameter -Werror -g -I src/core -I src/event -I src/event/modules -I src/os/unix -I objs -I src/http -I src/http/modules \
-o objs/addon/src/ngx_http_headers_more_filter_module.o \
/tmp/headers-more-nginx-module/src/ngx_http_headers_more_filter_module.c
cc -c -fPIC -pipe -O -W -Wall -Wpointer-arith -Wno-unused-parameter -Werror -g -I src/core -I src/event -I src/event/modules -I src/os/unix -I objs -I src/http -I src/http/modules \
-o objs/addon/src/ngx_http_headers_more_headers_out.o \
/tmp/headers-more-nginx-module/src/ngx_http_headers_more_headers_out.c
cc -c -fPIC -pipe -O -W -Wall -Wpointer-arith -Wno-unused-parameter -Werror -g -I src/core -I src/event -I src/event/modules -I src/os/unix -I objs -I src/http -I src/http/modules \
-o objs/addon/src/ngx_http_headers_more_headers_in.o \
/tmp/headers-more-nginx-module/src/ngx_http_headers_more_headers_in.c
cc -c -fPIC -pipe -O -W -Wall -Wpointer-arith -Wno-unused-parameter -Werror -g -I src/core -I src/event -I src/event/modules -I src/os/unix -I objs -I src/http -I src/http/modules \
-o objs/addon/src/ngx_http_headers_more_util.o \
/tmp/headers-more-nginx-module/src/ngx_http_headers_more_util.c
cc -c -fPIC -pipe -O -W -Wall -Wpointer-arith -Wno-unused-parameter -Werror -g -I src/core -I src/event -I src/event/modules -I src/os/unix -I objs -I src/http -I src/http/modules \
-o objs/ngx_http_headers_more_filter_module_modules.o \
objs/ngx_http_headers_more_filter_module_modules.c
cc -o objs/ngx_http_headers_more_filter_module.so \
objs/addon/src/ngx_http_headers_more_filter_module.o \
objs/addon/src/ngx_http_headers_more_headers_out.o \
objs/addon/src/ngx_http_headers_more_headers_in.o \
objs/addon/src/ngx_http_headers_more_util.o \
objs/ngx_http_headers_more_filter_module_modules.o \
-shared
make[1]: Leaving directory `/tmp/nginx-1.10.3'


Then I add to nginx.conf:

load_module /ngx_http_headers_more_filter_module.so;

Running `nginx -t` gives this error:

nginx: [emerg] module "/usr/lib64/nginx/modules/ngx_http_headers_more_filter_module.so" is not binary compatible in /etc/nginx/nginx.conf:12
nginx: configuration file /etc/nginx/nginx.conf test failed
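The "not binary compatible" error is expected here: the module was built against a bare `./configure --prefix=/tmp/nginx-1.10.3` (note the summary says "OpenSSL library is not used"), while the installed binary was built with the long option list shown by `nginx -V`. A dynamic module must be compiled with the same configure arguments as the binary that loads it. A sketch of the rebuild:

```sh
cd /tmp/nginx-1.10.3/
# Reuse the EXACT "configure arguments:" line printed by `nginx -V`
# (all of it, unchanged), then append the module flag:
./configure <all flags from nginx -V> --add-dynamic-module=/tmp/headers-more-nginx-module
make modules
cp objs/ngx_http_headers_more_filter_module.so /usr/lib64/nginx/modules/
```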

reverse proxy websockets (no replies)

Hi All,

I hope you can help me. I have multiple sites, and in front of them I have an nginx reverse proxy to handle all requests. Every site that uses websockets does not work for me from my company proxy; if I access my websites from elsewhere, it works. However, if I access another websocket URL (from Mattermost, for example), it works. If I then open my Mattermost website again, I get a websocket error:

error during WebSocket handshake: unexpected response code: 400

How can this be resolved?
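A 400 at the WebSocket handshake is the classic symptom of a reverse proxy not forwarding the Upgrade/Connection headers (hedged, since the company-proxy angle may also play a role). The standard pattern from the nginx docs, with the upstream name as a placeholder:

```nginx
# In the http {} block: forward "Connection: upgrade" only when the
# client actually asked for an upgrade:
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

# In each proxied location:
location / {
    proxy_pass http://backend;   # placeholder upstream
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;
    proxy_set_header Host $host;
}
```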

NGINX Reverse Proxy over IPsec (no replies)

Hey people,
I have configured nginx as a reverse proxy on my pfSense box.
Next, I added an IPsec tunnel connecting to my network.
Now I'm forwarding HTTP and HTTPS through the IPsec tunnel back to my servers using the nginx reverse proxy. The problem is that on the backend server I do not see the end-client IP; instead I see the IP address of the IPsec tunnel on my pfSense box.
In the nginx config I have tested the following, but the result is still the same: no client IP.
location / {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-Port $server_port;
}

Does anyone know if it is possible to fix that?
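Those proxy_set_header lines only attach the client IP as HTTP headers; the backend still logs the TCP source address (the tunnel IP) unless it is told to trust those headers. If the backend also runs nginx, the realip module does this (the subnet below is a placeholder for the proxy's tunnel address range):

```nginx
# On the BACKEND server's config:
set_real_ip_from 10.10.10.0/24;     # pfSense/IPsec tunnel range (assumption)
real_ip_header X-Forwarded-For;
real_ip_recursive on;
```

Other backends have equivalents (e.g. Apache's mod_remoteip); the key point is that the restoration happens on the receiving side, not on the proxy.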

Thanks
Raf

Reverse Proxy to Docker Container (on another host) (no replies)

I have a reverse proxy setup that I know works fine: SSL, a good cert, the whole shindig. On another host I have a webserver (Ombi) whose alpha I want to run, which is only available as a docker image. I can access the webserver just fine without the reverse proxy, but I need to be able to use SSL (not currently available for the webserver) and not take up public IPs. When I try to connect to example.com/test, I get nothing but the title element (the tab is renamed) and a blank white screen. Do I need to change something with docker's DNS, or something in my proxy config, to make this work?


server {
    listen 80;
    listen [::]:80;
    server_name example.com www.example.com;
    return 301 https://example.com$request_uri;
}

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;

    root /var/www/example.com;

    index index.html index.htm index.nginx-debian.html;

    server_name example.com;

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        try_files $uri $uri/ =404;
    }

    location /test {
        proxy_pass http://1.1.1.2:3579;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Host $server_name;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Ssl on;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_read_timeout 90;
        proxy_redirect http://1.1.1.2:3579 https://$host;
    }

    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem; # managed by Certbot
}
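A blank page that still sets the title usually means the HTML loaded but its assets did not: apps that emit absolute asset paths ("/content/app.js") break when proxied under a sub-path like /test, because those requests fall through to the try_files block instead of the proxy. Two hedged fixes: set the app's base-URL option to /test if Ombi exposes one, or give the app its own hostname so it lives at / (sketch below; the subdomain is hypothetical):

```nginx
server {
    listen 443 ssl http2;
    server_name ombi.example.com;   # hypothetical subdomain

    # same ssl_certificate / ssl_certificate_key lines as above

    location / {
        proxy_pass http://1.1.1.2:3579;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```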