Channel: Nginx Forum - How to...

Problem with ssl_session_ticket_key and ssl_session_cache (no replies)

I am currently doing my thesis on Origin Confusion, so I am experimenting to see how it can occur. I am trying to share the cached SSL session IDs between all virtual hosts regardless of the IP/interface they serve, but this was unsuccessful. Similarly, I want to define one ticket.key to encrypt the session tickets of all virtual hosts, but only the first two hosts in the config below are using it. On the other hand, if I remove the 192.168.50.12/192.168.50.15 addresses from the listen directives of all the virtual hosts, then the ticket key is used across all virtual hosts.

I have the following configuration:

resolver 192.168.1.11;
ssl on;
ssl_session_timeout 5m;
ssl_session_tickets on;
ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv3:+EXP;
ssl_prefer_server_ciphers on;
ssl_session_ticket_key /etc/nginx/ssl/ticket.key;
ssl_session_cache shared:SSL:20m;

server {
listen 192.168.50.12:443 ssl;
server_name www.page1.com cdn1.page1.com;

root /usr/share/nginx/www/page1.com;
index index.html index.htm;

ssl_certificate /etc/nginx/ssl/*.page1.com.cert.pem;
ssl_password_file /etc/nginx/ssl/pass;
ssl_certificate_key /etc/nginx/ssl/*.page1.com.key.pem;
ssl_trusted_certificate /etc/nginx/ssl/ca-chain-um-thesis.cert.pem;

location / {
try_files $uri $uri/ /index.html;
}

location /some/path {
proxy_set_header Host $arg_page;
proxy_pass_header Set-Cookie;
proxy_pass_header P3P;
proxy_pass $arg_prot://$arg_page$arg_path;
}

}

server {
listen 192.168.50.12:443 ssl;
server_name www.page2.com;

root /usr/share/nginx/www/page2.com;
index index.html index.htm;

ssl_certificate /etc/nginx/ssl/www.page2.com.cert.pem;
ssl_password_file /etc/nginx/ssl/pass;
ssl_certificate_key /etc/nginx/ssl/www.page2.com.key.pem;
ssl_trusted_certificate /etc/nginx/ssl/ca-chain-um-thesis.cert.pem;

location / {
try_files $uri $uri/ /index.html;
}

location /some/path {
proxy_set_header Host $arg_page;
proxy_pass_header Set-Cookie;
proxy_pass_header P3P;
proxy_pass $arg_prot://$arg_page$arg_path;
}
}

server {
listen 192.168.50.15:443 ssl;
server_name www.pagna3.com;

root /usr/share/nginx/www/page3.com;
index index.html index.htm;

ssl_certificate /etc/nginx/ssl/www.pagna3.com.cert.pem;
ssl_password_file /etc/nginx/ssl/pass;
ssl_certificate_key /etc/nginx/ssl/www.pagna3.com.key.pem;
ssl_trusted_certificate /etc/nginx/ssl/ca-chain-um-thesis.cert.pem;

location / {
try_files $uri $uri/ /index.html;
}

location /some/path {
proxy_set_header Host $arg_page;
proxy_pass_header Set-Cookie;
proxy_pass_header P3P;
proxy_pass $arg_prot://$arg_page$arg_path;
}
}

server {
listen 192.168.50.15:443 ssl;
server_name www.pagna4.com;

root /usr/share/nginx/www/page4.com;
index index.html index.htm;

ssl_certificate /etc/nginx/ssl/www.pagna4.com.cert.pem;
ssl_password_file /etc/nginx/ssl/pass;
ssl_certificate_key /etc/nginx/ssl/www.pagna4.com.key.pem;
ssl_trusted_certificate /etc/nginx/ssl/ca-chain-um-thesis.cert.pem;

location / {
try_files $uri $uri/ /index.html;
}

location /some/path {
proxy_set_header Host $arg_page;
proxy_pass_header Set-Cookie;
proxy_pass_header P3P;
proxy_pass $arg_prot://$arg_page$arg_path;
}

}

For some reason the ssl_session_ticket_key is only being applied to the virtual hosts with listen 192.168.50.12; similarly, the virtual hosts are not sharing the same cache as defined by ssl_session_cache at the top.

Am I doing something wrong, or is this by design?
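
A minimal sketch of one thing to try while debugging this (an assumption, not a confirmed fix): restating the shared cache and the ticket key inside each server block rules out any inheritance issue across the differently bound listen sockets. Paths, addresses and names below mirror the config above.

server {
    listen 192.168.50.15:443 ssl;
    server_name www.pagna3.com;

    # restated per server block for testing
    ssl_session_cache shared:SSL:20m;
    ssl_session_ticket_key /etc/nginx/ssl/ticket.key;

    ssl_certificate /etc/nginx/ssl/www.pagna3.com.cert.pem;
    ssl_certificate_key /etc/nginx/ssl/www.pagna3.com.key.pem;
    ssl_password_file /etc/nginx/ssl/pass;

    root /usr/share/nginx/www/page3.com;
    index index.html index.htm;
}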

Mutual authentication for SSL termination for TCP Upstream (1 reply)

Hi All,

The particular feature I am interested in is SSL termination for a TCP upstream.

We have an application which accepts messages (TCP) over TLS. With NGINX, I want to do the following:

1. Terminate TLS at NGINX and then NGINX will forward the decrypted packets to the application.

2. There should be mutual authentication between NGINX and the client (for the application). I can find documentation that talks about server-side authentication (the client verifying the server's certificate), but I cannot find the steps to configure mutual authentication (both client and server verifying each other's certificates). Any suggestions?

https://www.nginx.com/resources/admin-guide/nginx-tcp-ssl-termination/

3. Also, is it possible to allow the SSL handshake only if the client has a specific identity? Can this be implemented in NGINX or NGINX Plus?

Thanks,
Arnab
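
A minimal sketch of the kind of stream configuration being asked about, assuming an nginx built with the stream SSL module and a version in which it supports client certificate verification; all ports and paths are placeholders:

stream {
    server {
        listen 6000 ssl;                                        # TLS-terminating front port (placeholder)
        ssl_certificate        /etc/nginx/ssl/server.cert.pem;  # certificate presented to clients
        ssl_certificate_key    /etc/nginx/ssl/server.key.pem;
        ssl_client_certificate /etc/nginx/ssl/client-ca.pem;    # CA that signed the client certificates
        ssl_verify_client      on;                              # handshake fails without a valid client certificate
        proxy_pass             127.0.0.1:6001;                  # decrypted TCP to the application (placeholder)
    }
}

Restricting the handshake to one specific client identity (question 3) would need an additional check on the presented certificate's subject, which this sketch does not attempt.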

How do I enable the http2 module? (3 replies)

Hello, noob here.

How do i enable "http2" module in nginx ?
Console gives me "the "http2" parameter requires ngx_http_v2_module"
And i don't have "--with-http_v2_module" if i do "nginx -V"

So, how do i include, add these module to my nginx ?

Running under Windows; I downloaded the latest (1.9.12) version for Windows.
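
For reference, a hedged sketch of what an HTTP/2-enabled server block looks like once the binary actually contains ngx_http_v2_module; the module is selected at build time with ./configure --with-http_v2_module, and whether a given prebuilt Windows zip includes it depends on how it was built. Names and paths are placeholders.

server {
    listen 443 ssl http2;              # the http2 parameter only works when the module is compiled in
    server_name example.com;           # placeholder
    ssl_certificate     cert.pem;      # placeholder paths
    ssl_certificate_key cert.key;
    root html;
}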

Use nginx lua extension with custom lua in C-modules (1 reply)

Hi,

How to use custom Lua modules written in C in Nginx?

1) It is clear how to extend Lua with C: http://www.troubleshooters.com/codecorn/lua/lua_lua_calls_c.htm
Tried the test samples -> all work fine.

2) It is clear how to use Lua modules with nginx written in Lua: https://github.com/openresty/lua-nginx-module#statically-linking-pure-lua-modules
Tried the test samples -> all work fine.

But modules written in C and compiled as ".o" or ".so" objects are not working.

1) I configure nginx with --with-ld-opt="-Wl,-rpath,/path/to/luajit-or-lua/lib,/<path>/mylua.o"
Content of "mylua.c"
--------------------------------------------------
#include "lua.h"
#include "lualib.h"
#include "lauxlib.h"

typedef int (*lua_CFunction) (lua_State *L);

lua_State* L;

static int test(lua_State *L)
{
int x = 777;
lua_pushnumber(L, x);
return 1;
}

int luaopen_mylua(lua_State *L)
{
lua_register(L, "test", test);
return 0;
}
--------------------------------------------------

2) This module works if I use it in Lua, like:
lua> require("mylua");
print(test()); -- Prints "777"

3) Nginx compiles correctly.

4) When I use in nginx config:
content_by_lua_file '/opt/nginx/test.lua';

Content of /opt/nginx/test.lua
local foo = require("mylua");
ngx.say(foo.test());

I'm receiving the error:
[error] 9375#0: *1 lua entry thread aborted: runtime error: /opt/nginx/test.lua:1: module 'mylua' not found:
no field package.preload['mylua']

The names of the modules and files are 100% correct; I have checked ten times.
I suppose the Lua C module must be compiled against LuaJIT.

Thanks in advance
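
A speculative sketch, not a confirmed fix: C Lua modules are normally loaded at runtime as shared objects through package.cpath rather than linked into the nginx binary as .o files. One thing to try is compiling mylua.c into mylua.so against the same LuaJIT that lua-nginx-module uses and pointing lua_package_cpath at it (the directory below is hypothetical):

http {
    lua_package_cpath "/opt/nginx/lua/?.so;;";        # hypothetical directory containing mylua.so

    server {
        listen 8080;
        location /t {
            content_by_lua_file /opt/nginx/test.lua;  # the script that does require("mylua")
        }
    }
}

Note also that luaopen_mylua registers test as a global rather than returning a module table, so even once the module loads, foo.test() would still fail; calling the global test() or returning a table from luaopen_mylua are the two usual options.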

Displaying multiple files in folder (no replies)

I have a folder that contains multiple files that need to be accessed and displayed. I could set up a location {} for each easily enough, but files can be added to and deleted from this folder by the program that creates them. I need an example of how to set up a location {} that will serve the specific file when its link is clicked within a listing page. Here is an example of such a page: http://www.noreply.org/echolot/thesaurus/

Thanks
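
A minimal sketch of the kind of location this seems to call for, assuming the files live under a single directory (the path and URL prefix are hypothetical): one prefix location serves both the listing and any individual file in it, so files added or deleted by the program need no config change.

location /reports/ {
    alias /var/data/reports/;   # hypothetical directory the program writes into
    autoindex on;               # generates the clickable listing
}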

nginx deny allow with http inside https encapsulation (no replies)

Hi all

We have a machine with nginx running on port 80 behind a hardware load balancer which has the SSL certificate bound to it. HTTPS terminates at the load balancer and requests are forwarded to nginx on port 80; there is no access without HTTPS.

Nginx is configured with a load-balancer log format, which ensures the source IP is also passed to nginx and is being logged.

We need to deny certain IPs for a certain location while allowing the rest of the world, but with the following we are unable to achieve that. The configuration is as follows:

conf file
location /path/tobloack/ {
deny 1.2.3.4;
deny 5.6.7.8;
allow all;
}

location /path/toblock/ {
deny 1.2.3.4;
deny 5.6.7.8;
allow all;
}

nginx listens on port 80 only.
nginx version: nginx/1.9.2

When we access the page over port 443 / https, it is not blocking access,
i.e. https://webaddress.com/path/toblock/ is not denied when accessed from IP 1.2.3.4.

Can someone help me find where we are going wrong?

Thanks
Raj
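
A hedged sketch of one likely angle: allow/deny compare against $remote_addr, which behind the load balancer is the balancer's own address rather than the client's. If the balancer forwards the client address in a header (X-Forwarded-For is assumed here; the actual header depends on the device), the realip module can substitute it before the access checks run:

set_real_ip_from 10.0.0.0/8;        # hypothetical address range of the load balancer
real_ip_header   X-Forwarded-For;   # header the balancer sets; an assumption

location /path/toblock/ {
    deny 1.2.3.4;
    deny 5.6.7.8;
    allow all;
}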

Custom log method with perl (no replies)

Is there any solution in perl like log_by_lua? https://github.com/openresty/lua-nginx-module#log_by_lua_block

Or is there any method by which I can capture the sent-bytes data in Perl?

Currently we have a system that parses the log files to push the sent-bytes data into redis, but it is far from an ideal solution.
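
Not a Perl-phase hook, but a hedged sketch of one way to get per-request byte counts out of nginx without parsing log files: a dedicated log_format plus an access_log pointed at syslog (supported since nginx 1.7.1), with a small consumer pushing the lines into redis. The syslog endpoint is hypothetical.

log_format bytes_only '$host $body_bytes_sent';
access_log syslog:server=127.0.0.1:5140 bytes_only;   # hypothetical local syslog/relay endpoint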

Modify cached content (no replies)

Hi,

is there any chance to modify cached content on the fly?
I want to display a message on cached pages if the backend isn't available anymore.

.dff
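
A speculative sketch rather than an answer: nginx cannot edit entries already sitting in the cache, but proxy_cache_use_stale keeps serving stale copies while the backend is down, and $upstream_cache_status can be exposed so the page or an edge script can decide to show a notice. The header name is an assumption.

proxy_cache_use_stale error timeout http_500 http_502 http_503 http_504;
add_header X-Cache-Status $upstream_cache_status;   # e.g. HIT, STALE, MISS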

Multiple node.js on same server (no replies)

Hi there. I'm a newbie with nginx and I'm trying to host two node.js apps (my own application on port 5000 and a mongo db manager on port 1234). I have created the CNAME entries on my DNS server to point to the same host (app.mydomain.com and mongo.mydomain.com). The application also uses https so I want to redirect http requests to https.

My configuration looks like this:

server {
listen 443;
server_name app.mydomain.com;

ssl on;
ssl_certificate /etc/nginx/ssl/app.mydomain.com.crt;
ssl_certificate_key /etc/nginx/ssl/app.mydomain.com.key;

location / {
proxy_pass http://localhost:5000;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
}

server {
listen 80;
server_name mongo.mydomain.com;

location / {
proxy_pass http://localhost:1234;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
}

server {
listen 80 default_server;
server_name app.mydomain.com;
return 301 https://$server_name$request_uri;
}

But when I browse to http://mongo.mydomain.com, I get a redirection to https://app.mydomain.com

What am I doing wrong?
Thank you in advance!
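
A hedged sketch of a catch-all redirect that preserves whichever host was requested; using $host instead of $server_name means a request for mongo.mydomain.com that lands in this block is not rewritten to app.mydomain.com. Whether this is the actual cause depends on which server block is matching, so treat it as one thing to test.

server {
    listen 80 default_server;
    server_name _;                             # catch-all name
    return 301 https://$host$request_uri;      # keep the requested hostname
}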

Load Balance Two Sockets with Different Roots (no replies)

I am using Unicorn as the application server in front of my web application. I want to essentially do blue/green testing by having two versions of the site running at the same time. The problem is that the root path is different from one version of the site to the other. Is there a way to do this?

This is my config. Right now it doesn't work: when the second server comes up in the round robin, the files break because they aren't under that root path.

upstream unicorn {
server unix:/tmp/unicorn.main.sock fail_timeout=0;
server unix:/tmp/unicorn.main_staging.sock fail_timeout=0;
}

server {
listen 80;
server_name mysite.com;
root /var/www/sites/main/current/public;

try_files $uri/index.html $uri @unicorn;

location @unicorn {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://unicorn;
}

location ~ ^/assets/ {
expires 1y;
add_header Cache-Control public;

add_header ETag "";
break;
}

error_page 500 502 504 /500.html;
client_max_body_size 4G;
keepalive_timeout 10;
}
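
A speculative sketch of one way to keep the two builds separate instead of mixing them in a single round-robin pool: split_clients (an http-level directive) picks an upstream per client, so each request is proxied to one build and that build serves its own assets. The upstream names and the 20% share are assumptions.

split_clients "${remote_addr}${http_user_agent}" $unicorn_pool {
    20%  unicorn_staging;                    # the "green" build
    *    unicorn_main;                       # everyone else stays on "blue"
}

upstream unicorn_main    { server unix:/tmp/unicorn.main.sock fail_timeout=0; }
upstream unicorn_staging { server unix:/tmp/unicorn.main_staging.sock fail_timeout=0; }

server {
    listen 80;
    server_name mysite.com;

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://$unicorn_pool;     # resolves to one of the upstream blocks above
    }
}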

Laravel & Opencart Config (no replies)

I'm not a network admin (as you'll quickly realise) but have set up a test environment of a new customer's site on a VPS. For reasons beyond my control this is an Opencart site with a bit of Laravel 4 in it as well. It currently runs on an Apache server, but I would like to get it working in this test environment using Nginx instead, before I start working on it. I have managed to get the Nginx config to work with Opencart, and have got Nginx to work on Laravel-only sites before, but combining both has proven beyond me.

The Opencart site works fine, but when I go to a Laravel URL (laravel/public), PHP doesn't serve the pages; instead they are treated as a download. I was wondering if anyone could have a look and give me an idea of what I'm doing wrong.

Thanks

Here's what the config looks like (I've changed the URLs to protect the innocent!).

server {
listen [::]:80;
listen 80;
server_name www.blah.blah.com;
return 301 $scheme://blah.blah.com$request_uri;
}

server {
listen [::]:80;
listen 80;

server_name blah.blah.com;
root /usr/share/nginx/blah.blah.com/html;
charset utf-8;

location / {
try_files $uri @opencart;
server_tokens off;
}

location @opencart {
rewrite ^/(.+)$ /index.php?_route_=$1 last;
}

location ^~ /laravel {
try_files $uri $uri/ /laravel/public/index.php$query_string;
}

rewrite ^/sitemap.xml$ /index.php?route=feed/google_sitemap last;
rewrite ^/googlebase.xml$ /index.php?route=feed/google_base last;
rewrite ^/download/(.*) /index.php?route=error/not_found last;
rewrite /admin$ $scheme://$host$uri/ permanent;

location ~ \.php$ {
fastcgi_pass 127.0.0.1:9001;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
try_files $uri =404;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
include /etc/nginx/fastcgi_params;
}

location /admin {
index index.php;
}

include h5bp/basic.conf;
}
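
A hedged sketch of one commonly suggested arrangement, not a verified fix: after try_files falls back to /laravel/public/index.php, the request is re-matched and stays inside the ^~ /laravel prefix, so that prefix needs its own PHP handler; otherwise the script is served as a plain file, which is what a browser offers as a download. Paths and the FastCGI port mirror the config above.

location ^~ /laravel {
    try_files $uri $uri/ /laravel/public/index.php?$query_string;

    location ~ \.php$ {
        fastcgi_pass 127.0.0.1:9001;
        fastcgi_index index.php;
        include /etc/nginx/fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}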

Reverse proxy is not working after configuration (no replies)

I am using a Linux AMI Amazon instance on which I have installed nginx following this tutorial : https://castix.wordpress.com/2012/09/23/web-hosting-installing-nginx-as-a-reverse-proxy/

My reverse proxy doesn't seem to work. I haven't done any DNS configuration, since this is the first time I am using all of these technologies and I don't know how. Can someone give me some help with any configuration needed on the DNS side?

Also, my conf.d directory contains only a virtual.conf file, not what every other tutorial shows. There is only one default configuration file, /etc/nginx/nginx.conf. Should I use this file or keep the configuration under conf.d?
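
For the conf.d question, a minimal sketch of a reverse-proxy vhost that could live in /etc/nginx/conf.d/virtual.conf; the stock /etc/nginx/nginx.conf normally contains an include for conf.d/*.conf, so either file works as long as it ends up included. The server name and backend port are placeholders.

server {
    listen 80;
    server_name example.com;                   # the DNS name that should point at this instance
    location / {
        proxy_pass http://127.0.0.1:8080;      # placeholder backend
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}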

LBActive active dynamic load balancing (no replies)

hey all,

I just finished up the first working version of my active dynamic load balancing enabler. It's intended for systems that could work as an active (active backend health checks) dynamic (dynamically adjust weighting) LB but are missing those features.

I chose nginx as the first application because we already have a small LB cluster of them in production. I will eventually move the nginx-specific code into its own module to enable the addition of more modules for other applications.

I am currently looking for anyone willing to help test the project. You can find it here: https://sourceforge.net/projects/lbactive/?source=navbar

Have fun,
Chuck

Client certificate based AUTH (ssl_client_certificate vs ssl_trusted_certificate) (no replies)

Hello,
I have client certificate based authorization defined in nginx as follows:


ssl_client_certificate /path/to/MY_CA_ROOT.pem;
ssl_verify_client optional;
ssl_verify_depth 2;

#special location - admin only access (with client cert signed by CA)
location /myApp/admin/ {
if ($ssl_client_verify != SUCCESS) { return 403; }
}

Now, according to official documentation: http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_client_certificate

I wanted to replace ssl_client_certificate with ssl_trusted_certificate.

But when I do so, i.e. instead of:

#ssl_client_certificate /path/to/MY_CA_ROOT.pem;

I set:

ssl_trusted_certificate /path/to/MY_CA_ROOT.pem;

Nginx complains:

nginx: [emerg] no ssl_client_certificate for ssl_client_verify


Are those two directives interchangeable, or am I getting it wrong?

The documentation says:
Specifies a file with trusted CA certificates in the PEM format used to verify client certificates and OCSP responses if ssl_stapling is enabled.

In contrast to the certificate set by ssl_client_certificate, the list of these certificates will not be sent to clients.
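
For reference, a hedged sketch of the two directives used together rather than one replacing the other: ssl_client_certificate supplies the CA whose name may be advertised to clients (and the error above suggests ssl_verify_client insists on it), while ssl_trusted_certificate can hold additional CAs that are trusted for verification but never sent. The second path is hypothetical.

ssl_client_certificate  /path/to/MY_CA_ROOT.pem;         # required once ssl_verify_client is enabled
ssl_trusted_certificate /path/to/extra-trusted-cas.pem;  # hypothetical: extra CAs, not advertised to clients
ssl_verify_client optional;
ssl_verify_depth 2;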

Nginx allow access to location which falls under regex deny (no replies)

Hi everyone! I have a problem with my nginx configuration. I have a deny rule in the root location like this

location ~* "/([a-z]{2})/lib/"
but I want to add an exclusion to this rule; I need access to this location:
.../js/lib/
because otherwise the JSON scripts there don't work. Could someone please help me with this issue?
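
A speculative sketch using a PCRE negative lookahead so that the js directory is excluded from the two-letter match while everything else stays covered; the body of the block is whatever the existing rule does (deny all is an assumption here):

location ~* "/(?!js/)([a-z]{2})/lib/" {
    deny all;    # assumption: mirrors whatever the original deny rule does
}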

Help configuring headers to generate a proper Kerberos Token for a backend behind Nginx (no replies)

Hello,

I'm using a backend authentication which provides SPNEGO Authentication behind a Nginx proxy.
It seems that the client SPNEGO token is generated from the hostname, so my users cannot complete the SPNEGO negotiation correctly with this configuration:

server {
listen 443;
location / {
satisfy all;
auth_request /auth;
auth_request_set $saved_www_authenticate $upstream_http_www_authenticate;
add_header WWW-Authenticate $saved_www_authenticate;
proxy_pass https://myprotectedsite;
}
location /auth {
internal;
proxy_pass http://myauthbackend;
proxy_pass_request_body off;
proxy_set_header Content-Length "";
proxy_set_header X-Original-URI $request_uri;

}
}


The backend correctly receives the WWW-Authenticate header with the token, but obviously the token is not the right one and does not pass authentication. Is there a way to modify the right headers to tell the client that the target is the backend itself rather than the proxy?
Best regards

Point folder to page (2 replies)

I have the following code that accesses my pages correctly:

location /echolot {
alias /home/webpage/www/echolot;
autoindex on;
}

When I go to allpingers.net/echolot/, it shows an 'Index of /echolot/' page. What I want allpingers.net/echolot/ to actually go to is https://allpingers.net/echolot/echolot.html. I can't symlink /echolot/ to index.html because I need the index page to remain where it is now. Does anyone know if I can somehow point allpingers.net/echolot/ to allpingers.net/echolot/echolot.html?

Thanks.
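
A minimal sketch of one approach, assuming the goal is simply that /echolot/ serves echolot.html by default: the index directive makes it the directory's default document for this location only, and autoindex remains as a fallback for directories without that file.

location /echolot/ {
    alias /home/webpage/www/echolot/;
    index echolot.html;     # /echolot/ now serves echolot.html instead of the listing
    autoindex on;           # still lists directories that have no echolot.html
}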

Discard duplicate HTTP request with nginx? (no replies)

Hello
I have been using nginx for a year now, and without a doubt it has been very useful as a proxy server for distributing my static content.
Now I have a problem with an application which is sending the same HTTP POST request twice to another application, causing an error. It is obviously a bug that needs fixing, but I was wondering whether, while the bug is being fixed, I could use nginx to drop the second HTTP request. This particular request should never have the same payload twice, so there is no need to put a time constraint on it.
Any suggestion would be a great help

Regards

Luc

Hotlinking (no replies)

Hey guys,

I am running nginx on one of my servers for streaming/hosting mp4 files. How can I prevent hotlinking?
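
A minimal sketch of the usual approach with the referer module, assuming "preventing hotlinking" means refusing requests for the mp4 files whose Referer comes from another site; example.com is a placeholder for the server's own domain.

location ~* \.mp4$ {
    valid_referers none blocked server_names example.com *.example.com;
    if ($invalid_referer) {
        return 403;         # request came from a foreign referer
    }
    # normal serving continues here (root/alias inherited from the server block)
}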

nginx as proxy to my mysql server in docker (no replies)

I'm working with a cloud server running N Docker containers. On the server I have one nginx that routes requests to my containers based on the domain. I can proxy my HTTP traffic perfectly, but I'm having trouble with MySQL, since it uses a protocol different from HTTP. I tried solutions like the one below, but they didn't work. Can someone shed some light on where my mistake is?

upstream mysql {
server 127.0.0.1:1401;
}

server {

listen 80;

server_name mydomain.com.br www.mydomain.com.br;

location / {
proxy_pass http://127.0.0.1:1400;
}
}

server {

listen 3306;

server_name mydomain.com.br www.mydomain.com.br;

location / {
proxy_pass mysql;
}
}

I already saw this page (https://www.nginx.com/resources/admin-guide/proxy-protocol/), but it did not work when I tried to define the server_name / domain:

stream {
server {
listen 12345;
proxy_pass example.com:12345;
proxy_protocol on;
}
}
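
A hedged sketch of the stream-module form of this (it requires an nginx built with the stream module): MySQL traffic is not HTTP, so it has to be proxied in a stream{} block, a sibling of http{} rather than something inside it, and because plain TCP carries no Host header, routing by domain name is not available there; each MySQL backend needs its own listen port or IP instead. The ports mirror the ones above.

stream {
    server {
        listen 3306;                    # public MySQL port
        proxy_pass 127.0.0.1:1401;      # the container's mapped MySQL port
    }
}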