Channel: Nginx Forum - How to...
Viewing all 2931 articles
Browse latest View live

Complex Proxying (NAT) with NGINX (no replies)

Hi everyone,

I have a challenge I need to solve with NGINX. I have tried just about everything and I can't get this to fully work.

I have a front-end-facing NGINX server which hosts several sites.
I need one site to proxy to a back-end server that is not on the same network but reachable over MPLS, and load that page under "https://www.example.com/hiddensite/". That back-end server is a full website with a DB and reports. It runs plain, unencrypted HTTP, as it is in-house and was built that way. Outsiders have no direct access to the back-end server I'm trying to proxy through NGINX.

I have gotten the site's homepage to load through the NGINX proxy, but users cannot click links and browse the site at all.

Please help me if anyone knows how to get this done properly.

[internet]---[firewall]---[nginx server]---[lan1]---[MPLS]---[lan2]---[hiddensite server]
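For reference, a minimal sketch of the sub-path proxying described above. The upstream address is a placeholder, and the sub_filter lines assume nginx was built with ngx_http_sub_module; back ends that emit absolute links usually need this kind of rewriting (or a base-URL setting on the back end itself) before in-page links work:

```nginx
# Hypothetical sketch: serve the internal HTTP-only site under /hiddensite/.
location /hiddensite/ {
    # Trailing slashes make nginx strip /hiddensite/ before passing upstream.
    proxy_pass http://10.0.0.10/;          # placeholder back-end address
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Proto https;

    # Absolute redirects and HTML links from the back end still point at
    # "/..." -- rewrite them so the browser stays under /hiddensite/.
    proxy_redirect / /hiddensite/;
    sub_filter 'href="/' 'href="/hiddensite/';   # needs ngx_http_sub_module
    sub_filter 'src="/'  'src="/hiddensite/';
    sub_filter_once off;
}
```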

NGINX not starting git-http-backend (no replies)

I'm working on a Git server behind Nginx using the git-http-backend CGI script.

Currently I have a Passenger server serving a Rails app at port 2222. Behind the /git/ folder, however, I want to serve Git repositories. The thing is, Nginx doesn't seem to start the script. Whether I use a socket file or a different localhost port, I get a 502 error. This shows up in the Nginx error log:

2017/06/07 18:43:03 [error] 2147#0: *3 no live upstreams while connecting to upstream, client: 127.0.0.1, server: localhost, request: "GET /git/me/repo HTTP/1.1", upstream: "fastcgi://localhost", host: "localhost:2222"

It seems nginx is not starting the process to handle the git files.

This is the location part of my nginx setup:

location ~ /git(/.*) {
    include fastcgi.conf;
    include fastcgi_params;
    #fastcgi_pass 127.0.0.1:8888;
    fastcgi_param SCRIPT_FILENAME /Library/Developer/CommandLineTools/usr/libexec/git-core/git-http-backend;
    # export all repositories under GIT_PROJECT_ROOT
    fastcgi_param GIT_HTTP_EXPORT_ALL "";
    fastcgi_param GIT_PROJECT_ROOT /Users/userx/Documents/Projecten/repositories;
    fastcgi_param PATH_INFO $1;

    fastcgi_keep_conn on;
    fastcgi_connect_timeout 20s;
    fastcgi_send_timeout 60s;
    fastcgi_read_timeout 60s;
    #fastcgi_pass 127.0.0.1:9001;
    fastcgi_param REMOTE_USER $remote_user;
    #fastcgi_pass unix:/var/run/fcgi/fcgiwrap.socket;
    fastcgi_pass localhost:9001;
}

I can't figure it out alone, can anybody share their thoughts on this?
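One thing worth checking, offered as a guess: nginx never launches FastCGI processes itself; it only connects to one that is already listening, and "no live upstreams" is what it reports when nothing is there. git-http-backend is a plain CGI program, so a FastCGI wrapper such as fcgiwrap (or spawn-fcgi) must be running at whatever address fastcgi_pass points to. A sketch, with the socket path as an assumption:

```nginx
# Assumption: fcgiwrap is already running and listening on this socket;
# nginx only connects to FastCGI servers, it never starts them.
location ~ /git(/.*) {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME /Library/Developer/CommandLineTools/usr/libexec/git-core/git-http-backend;
    fastcgi_param GIT_HTTP_EXPORT_ALL "";
    fastcgi_param GIT_PROJECT_ROOT /Users/userx/Documents/Projecten/repositories;
    fastcgi_param PATH_INFO $1;
    fastcgi_pass unix:/var/run/fcgiwrap.socket;  # fcgiwrap's socket path varies by setup
}
```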

microcache pread() bad file descriptor error (no replies)

Hello,

I'm getting this error for multiple files on a server, and was wondering if anyone had any ideas what could be causing it or how to fix it. It's clearly related to the microcache, given the location it's complaining about, and turning off caching fixes it, but beyond that I'm stumped. Running nginx/1.10.0 (Ubuntu) on Ubuntu 16.04:
2017/06/07 14:08:54 [crit] 32493#32493: *5961 pread() "/var/cache/CACHENAME/6/dc" failed (9: Bad file descriptor), client: IP , server: SITENAME.dev, request: "GET /themes/basic/js/build/loadIn.js?v=1.x HTTP/1.1", host: "SITENAME.dev", referrer: "http://SITENAME.dev/"


Example of the microcache config from the server:

gzip off;
# Set up var defaults
set $no_cache "";
# If non GET/HEAD, don't cache & mark user as uncacheable for 1 second via cookie
if ($request_method !~ ^(GET|HEAD)$) {
    set $no_cache "1";
}
# Drop no-cache cookie if need be
# (for some reason, add_header fails if included in prior if-block)
if ($no_cache = "1") {
    add_header Set-Cookie "_mcnc=1; Max-Age=2; Path=/";
    add_header X-Microcachable "0";
}
# Bypass cache if no-cache cookie is set
if ($http_cookie ~* "_mcnc") {
    set $no_cache "1";
}
# Bypass cache if flag is set
proxy_no_cache $no_cache;
proxy_cache_bypass $no_cache;
# Set cache zone
proxy_cache CACHENAME;
# Set cache key to include identifying components
proxy_cache_key $scheme$host$request_method$request_uri;
# Only cache valid HTTP 200 responses for 1 second
proxy_cache_valid 200 1s;
# Serve from cache if currently refreshing
proxy_cache_use_stale updating;

proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto http;
proxy_set_header X-Forwarded-Port 80;
proxy_set_header Host $host;
}

Nginx is not picking up error_page (1 reply)

Hi Team,

I may need your help on this matter. I am trying to implement a redirect based on the 405 error page, but Nginx is not executing my redirect.
Please help me fix this. I have been stuck on this for the last 2 days.

location / {
    resolver 10.79.157.2 valid=30s;
    set $upstream_core "ssoapp.devsso.veri.internal:9443";
    proxy_pass https://$upstream_core;
    error_page 502 /DEV.502.nginx.html;
    error_page 405 /DEV.502.nginx.html;
    proxy_read_timeout 1200;
    proxy_send_timeout 1200;
    proxy_connect_timeout 1200;
    proxy_ignore_client_abort on;
    proxy_http_version 1.1;
    proxy_set_header Host $host;
}

location = /DEV.502.nginx.html {
    root /opt/applications/nginx/nginx-verison-1.8.0/html/;
}

location = @app {
    return 301 /;
}
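One likely explanation, offered as a guess: by default nginx relays upstream error responses unchanged, so error_page never fires for a 405 generated by the proxied application. proxy_intercept_errors changes that; a minimal sketch based on the config above:

```nginx
location / {
    proxy_pass https://$upstream_core;
    # Without this, a 405 produced by the upstream is relayed as-is and
    # the error_page directives below are never consulted.
    proxy_intercept_errors on;
    error_page 502 405 /DEV.502.nginx.html;
}
```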

Delay after reconnection to NGINX (no replies)

We have an NGINX server with Flowplayer. Flowplayer is configured with a 0.5-second buffer. But if I disconnect the NGINX server from the network and reconnect it again, I get a big delay on the client, more than 10 seconds. Is there a way to reduce the delay, or to force the player to reconnect to NGINX and play from the time NGINX was reconnected? Thanks.

How to send an HTTP request in C? (no replies)

I've been stuck on this problem for days. I think I should go with ngx_http_connection_t, ngx_http_request_t, and related functions. I can't find any useful documentation for this (very ironic: an HTTP server with no documentation at all on how to send an HTTP request). I read the source code, but with no luck. I'm not a decision maker, so I can't introduce OpenResty to our project. Does anyone know how to achieve this?

redirect http to https, but exclude API (no replies)

We switched a site from http to https, but for compatibility reasons we need some API endpoints to still be reachable over http.

Our current redirect is:

server {
    listen 80;
    server_name www.mysite.com;

    location ~ /.well-known {
        root /var/www/html;
        allow all;
    }

    return 301 https://$server_name$request_uri;
}

What do I need to change or add to make sure POST requests to http://www.mysite.com/api/reportNew are not forwarded to https?
I tried some variants with location and root, but somehow never succeeded.
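One approach, sketched with a placeholder backend: a server-level return runs before location matching, so the redirect has to move into a location block that /api/ requests do not hit:

```nginx
server {
    listen 80;
    server_name www.mysite.com;

    location ~ /.well-known {
        root /var/www/html;
    }

    # Exempt the API from the HTTPS redirect (path from the post above).
    location /api/ {
        proxy_pass http://127.0.0.1:8080;  # placeholder for however the API is served
    }

    # Scoping the redirect inside "location /" keeps it from firing for /api/.
    location / {
        return 301 https://$server_name$request_uri;
    }
}
```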

Submit Query to a particular server behind NGINX (no replies)

I am querying an NGINX server for a static file. I believe there is more than one upstream server behind NGINX at :80; I say this because of the two different 'Server' headers I receive in responses. One server is significantly faster than the other. Is there any way I could direct my request to a particular server from the client side?
Or is there any way to reset the proxy_cache_key associated with my requests from the client side, so that my requests consistently go to the good server?

Any guidance is greatly appreciated.

Thank you

nginx domain not working, redirects all subdomains to main domain (no replies)

I have a problem with my nginx. More specifically, I have added several subdomains, and the problem is that when you enter a subdomain it redirects to the main domain. My configuration is below.

https://pastebin.com/iKsjQ9wX

How to prevent unauthorized domain forwarding with Nginx? (no replies)

Hi,

An unauthorized domain, koyblanafuc.cf, is forwarding to our domain quackquack.in. We tried to stop this unauthorized domain from rendering our content/pages via nginx.

The solution worked on http, but doesn't seem to work on https. We want to catch unauthorized domains on both http and https.

-----
server {
    listen 80 default_server;
    root /aaa/bbb/www/404;
    index index.html index.htm;
    location / {
        try_files $uri $uri/ =404;
    }
}

server {
    listen 443 default_server;
    root /aaa/bbb/www/404;
    index index.html index.htm;
    location / {
        try_files $uri $uri/ =404;
    }
}
----

But I received this error:

-----
Secure Connection Failed

An error occurred during a connection to quackquack.in. SSL received a record that exceeded the maximum permissible length. Error code: SSL_ERROR_RX_RECORD_TOO_LONG
-----

My Domain Non-SSL V-host:

-----
server {
    listen 80;
    server_name quackquack.in www.quackquack.in;
    server_tokens off;

My Domain SSL V-host:
-----
server {
    listen 443;
    server_name quackquack.in;

    ssl on;
    ssl_certificate /myssl_crt_file;
    ssl_certificate_key /myssl_key_file;

    ssl_protocols SSLv2 TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:DHE-RSA-AES256-SHA;
    ssl_prefer_server_ciphers on;
-----
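For what it's worth, SSL_ERROR_RX_RECORD_TOO_LONG usually means the 443 listener answered in plain HTTP. A listen 443 default_server block needs the ssl flag and its own certificate (any certificate, even self-signed, since this block only serves the 404 page); a sketch with placeholder paths:

```nginx
server {
    listen 443 ssl default_server;
    server_name _;

    # Placeholder certificate paths; without "ssl" and a certificate here,
    # nginx answers port 443 in plain HTTP and browsers report
    # SSL_ERROR_RX_RECORD_TOO_LONG.
    ssl_certificate     /etc/nginx/certs/selfsigned.crt;
    ssl_certificate_key /etc/nginx/certs/selfsigned.key;

    root /aaa/bbb/www/404;
    index index.html index.htm;

    location / {
        try_files $uri $uri/ =404;
    }
}
```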

Any suggestions will greatly help!

How to load balance dns with nginx and lua (no replies)

Hello,
I have two DNS servers with different purposes. Both listen locally, and it's nginx that exposes UDP port 53.
Basically, one just filters some domain names and the other allows all traffic.
I choose between the two DNS servers based on a variable stored in a Redis database.

I would like to find a way to script nginx, in Lua or otherwise, to read the Redis variable and choose the right DNS server to forward the request to.

I saw that I have to use the stream directive to listen on UDP port 53, but I cannot find a proper way to make the Lua scripting work.

If you nginx gurus have a solution to propose, I will be very thankful.

Here's my no-working code:

stream {
    upstream dns1 { server 172.16.0.1:53; }
    upstream dns2 { server 172.16.0.2:53; }

    server {
        listen 53 udp;

        # That part not working
        set $dns;
        content_by_lua_block {
            local redis = require "resty.redis"
            local red = redis:new()

            local ok, err = red:connect("redis-ip", 6379)
            if not ok then
                nginx.say("failed to connect: ", err)
                return
            end

            local res, err = red:get("The redis var")
            if not res then
                ngx.var.dns = upstream.get_servers("dns1")
            else
                ngx.var.dns = upstream.get_servers("dns2")
            end
        }
        proxy_pass $dns
    }
}
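A sketch of one way this is usually structured with OpenResty, under several assumptions: the lua-resty-redis and ngx.balancer libraries are available, preread_by_lua_block applies to these UDP sessions, and the Redis key and addresses are placeholders. Cosockets are not allowed in the balancer phase, so the Redis lookup happens in preread and the peer choice in the upstream's balancer block:

```nginx
stream {
    upstream dns_by_redis {
        server 0.0.0.1:1234;   # placeholder; replaced at runtime
        balancer_by_lua_block {
            local balancer = require "ngx.balancer"
            -- choice computed in preread (below) and stashed in ngx.ctx
            local host, port = "172.16.0.1", 53
            if ngx.ctx.use_filtering_dns then
                host = "172.16.0.2"
            end
            local ok, err = balancer.set_current_peer(host, port)
            if not ok then
                ngx.log(ngx.ERR, "failed to set peer: ", err)
            end
        }
    }

    server {
        listen 53 udp;
        preread_by_lua_block {
            -- cosockets are usable here, unlike in the balancer phase
            local redis = require "resty.redis"
            local red = redis:new()
            local ok, err = red:connect("redis-ip", 6379)
            if ok then
                local res = red:get("The redis var")
                ngx.ctx.use_filtering_dns = (res ~= ngx.null)
            end
        }
        proxy_pass dns_by_redis;
    }
}
```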

What's the difference between "$is_args$args" and "$query_string"? (no replies)

Hi guys.

Are $is_args$args and $query_string the same?

# pattern 1
location ~ ^/test/enter1.html {
    rewrite (.*) http://another.com/test/list1.html$is_args$args;
}

# pattern 2
location ~ ^/test/enter2.html {
    rewrite (.*) http://another.com/test/list2.html$query_string;
}

I tried:
/test/enter1.html?page=10
/test/enter2.html?page=10

pattern 1 -> list1.html?page=10?page=10
pattern 2 -> list2.html?page=10

I thought they were the same. Am I misunderstanding something?
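Not a misunderstanding about the variables themselves: $query_string is documented as identical to $args (neither includes the leading "?"), and $is_args is "?" when arguments are present, empty otherwise. The doubled query in pattern 1 comes from rewrite itself: when the replacement carries arguments, nginx still appends the original query string unless the replacement ends with "?". A sketch:

```nginx
# A trailing "?" on the replacement stops nginx from appending the
# original arguments a second time:
location ~ ^/test/enter1.html {
    rewrite (.*) http://another.com/test/list1.html$is_args$args? redirect;
}
```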

response page based on mod security rule in error logs? (no replies)

Hi, I would like to accomplish the following.

ModSecurity is enabled in Nginx, and when a false positive hits (some rule blocks some page), I want the response page to show the error code from the modsec rule, for example: "you have been blocked by mod security rule 'XXX'", where the rule is taken from the logs. I think this could eventually be accomplished using if and map variables in the nginx configuration plus some dynamic error page, but I am not at all sure. Can anyone share their expert advice or experience?

thank you in advance!

Simplesaml configuration (no replies)

Hi, I'm having a little trouble configuring simplesaml with nginx.

My server config is:

location /simplesaml {
    alias /var/simplesamlphp/www;

    index index.php;

    location ~ \.php(/|$) {
        fastcgi_split_path_info ^(.+?\.php)(/.+)$;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
        fastcgi_index index.php;
        fastcgi_pass 127.0.0.1:9000;
        include fastcgi_params;
    }
}

I get the following error:

*203 FastCGI sent in stderr: "Unable to open primary script: /var/simplesamlphp/www/simplesaml/index.php (No such file or directory)"

The problem is that the URI path is being appended to the alias, for example:

/var/simplesamlphp/www/simplesaml/index.php

Should be:

/var/simplesamlphp/www/index.php
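For what it's worth, the nginx example in the SimpleSAMLphp installation docs avoids this by capturing the /simplesaml prefix in the inner location, so only the remainder is appended to the alias. A sketch along those lines (socket address kept from the config above):

```nginx
location ^~ /simplesaml {
    alias /var/simplesamlphp/www;

    location ~ ^(?<prefix>/simplesaml)(?<phpfile>.+?\.php)(?<pathinfo>/.*)?$ {
        # $phpfile is the path *after* /simplesaml, so the alias is not
        # duplicated in SCRIPT_FILENAME.
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_param SCRIPT_FILENAME $document_root$phpfile;
        fastcgi_param PATH_INFO $pathinfo if_not_empty;
        include fastcgi_params;
    }
}
```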

Many thanks
Jonny

Can't log in to site with reason=0 (no replies)

Hello,

We are using nginx version nginx/1.10.2 on
CentOS 7,

and we have the weirdest issue with nginx.
We connected an upstream to 2 servers.
When both work, we see this in the access log:

GET /owa/auth/logon.aspx?url=https%3a%2f%2fmail.domain.local%2fowa%2f&reason=0

and we cannot log in; it redirects to the login page over and over. But
if in the settings I make the upstream work with one server, it logs in fine!
What could be the issue?

You can see the commented-out domain2 in /etc/nginx/conf.d/default.conf below.

conf files here:

/etc/nginx/nginx.conf


user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

# Load dynamic modules. See /usr/share/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.
    include /etc/nginx/conf.d/*.conf;

    server {
        listen 80 default_server;
        listen [::]:80 default_server;
        server_name _;
        root /usr/share/nginx/html;

        # Load configuration files for the default server block.
        include /etc/nginx/default.d/*.conf;

        location / {
        }

        error_page 404 /404.html;
        location = /40x.html {
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
        }
    }
}



/etc/nginx/conf.d/default.conf


server {
    listen 80;
    return 301 https://$host$request_uri;
}

upstream cas {
    server domain1:443;
    # server domain2:443;
}

server {
    listen 443 ssl;
    server_name *.blackrock.local;

    ssl on;
    ssl_certificate /etc/nginx/certs/outlook_rev_proxy.cer;
    ssl_certificate_key /etc/nginx/certs/outlook_rev_proxy.key;

    location / {
        proxy_pass https://cas;
        proxy_read_timeout 360;
        proxy_pass_header Date;
        proxy_pass_header Server;

        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
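One guess worth testing: OWA's forms-based authentication is session-bound, and round-robin between two CAS servers sends alternate requests to a server that does not know the session, which looks exactly like a redirect loop back to the login page (reason=0). Making the upstream sticky is the usual remedy:

```nginx
upstream cas {
    ip_hash;              # keep each client pinned to one CAS server
    server domain1:443;
    server domain2:443;
}
```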

Help with redirects (no replies)

Good mates,

I'll explain the case: I have generated a static copy of my website's content, and I have created some rules to serve those files alongside my WordPress site.

The rules for each URL look like this:

rewrite ^/category/url1/ /category/file1/url1/index.php last;
rewrite ^/category/url1 /category/file1/url1/ last;

When I visit the URL, it is shown as https://domain/category/url1/

But if I make the same request without https and without the trailing "/", the path shown is: http://domain/category/url1/index.php

How do I make it so that, when accessed over https, the first URL is shown and not the second?


Thank you very much for your help!

DreamHost and craft cms (no replies)

$
0
0
Hey folks,

I need some help dealing with DreamHost and installing Craft CMS (on DreamHost). I have a VPS (virtual private server) on DreamHost. I was told that I could have nginx, craft working on the VPS. But now I'm told that I do not have access to 'sudo', I can't configure default.conf. The DreamHost doc's suggest that I can create a conf by creating the file in domainName/nginx. The first problem is I don't know what is suppose to go into the conf file, a second problem what would be the name of the conf file.

Normally, the default.conf has very little in it and I setup a sitename.vhost that contains:

server {
    listen 80;
    # listen [::]:80 default_server;
    listen 443 ssl http2;
    # listen [::]:443 ssl http2;

    # vhost-specific logs.
    access_log sitename/logs/craftcms.access.log;
    error_log sitename/logs/craftcms.error.log error;

    # Webroot directory
    root sitename/public;
    index index.html index.php;
    server_name sitename;

    # SSL configurations
    include /etc/nginx/conf/ssl-craftcms-selfsigned.conf;

    # Secure configurations
    include /etc/nginx/conf/secure-craftcms.conf;

    # PHP configurations
    include /etc/nginx/conf/php-fpm.conf;
}

Since I don't even know where the PHP files are located, or where the php-fpm files are, I'm at a loss.
I need some help, and I'm hoping that someone understands the DreamHost setup.
Thanks for any help,
Johnf

Help with redirect (no replies)

Hi,
I want to redirect a URL like
http://my-site.fr/?param1=xx&param2=xx&param3==xx to the homepage, but that doesn't work.
Could you help me?
I have tried many syntaxes but nothing works correctly:
rewrite ^/?param1=xx&param2=xx&param3==xx http://my-site.fr permanent;
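A possible reason it never matches: rewrite (like location) is applied to the normalized path only; the query string is not part of the pattern match. Testing $args instead should work; a sketch assuming the parameter values are the literal strings shown:

```nginx
location = / {
    # rewrite/location never see the query string; check $args instead.
    if ($args ~ "param1=xx&param2=xx&param3==xx") {
        # the redirect target has no query string, so it will not loop
        return 301 http://my-site.fr/;
    }
    # ... normal homepage handling ...
}
```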


Thank you

Conditional secure_link (1 reply)

Hi everyone,

I need to set up a new secured location in my nginx. For that I'm using http_secure_link_module.

If the URL contains the correct token, I return the page to the user and also add a "Set-Cookie" header status=is_authorized. If the token is incorrect, I return a 403 status code to the user. Up to here, everything is working fine.

The thing is: I need this secure_link to be conditional. For example, if the user has the status cookie set to is_authorized, the token won't be required as part of the URL (or the secure_link directive will be disabled).

Is there a way to achieve that?

Here's what I've got so far:

=========================================================

location / {

    if ($cookie_STATUS = "IS_AUTHORIZED") {
        # if possible, cancel secure_link for authorized (with cookie) users.
    }

    secure_link $arg_token;
    secure_link_md5 "MD5_SECRET_PARAMETERS";

    set $token $arg_token;

    if ($secure_link = "") {
        return 403;
    }

    if ($secure_link = "0") {
        return 410;
    }

    add_header Set-Cookie STATUS=IS_AUTHORIZED;
    add_header Set-Cookie TOKEN=$arg_token;

    include /usr/local/nginx/conf/rewrite_rules.conf;

    proxy_cache confluence_cache;
    include /usr/local/nginx/conf/cache.conf;
    # include /etc/nginx/shared/google_analytics.conf;

    proxy_set_header Authorization "Basic BASE_64_HASH";
    proxy_pass PROXY_URL;
}

=================================================
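There is no per-request switch for secure_link itself, but the rejection can be made conditional; an untested sketch that keeps the secure_link/secure_link_md5 directives from the config above and only refuses requests that lack both a valid token and the cookie:

```nginx
# Untested sketch: keep secure_link/secure_link_md5 as above, but only
# return 403 when the authorization cookie is also absent.
set $reject "";
if ($secure_link = "")  { set $reject "1"; }
if ($secure_link = "0") { set $reject "1"; }
if ($cookie_STATUS = "IS_AUTHORIZED") { set $reject ""; }
if ($reject = "1") { return 403; }
```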

Thank you,
Raphael.

Is it ok to set up gzip compression with https? (no replies)

Hello,

I'm new to Nginx and taking a course on it. The course recommends turning on gzip compression and using caching. I see that the Nginx configuration file notes a bug from 2014. The bug seems to describe a security issue when using Nginx with SSL and gzip compression. Is this bug still a problem? Is it safe to use gzip compression with SSL? (I'm planning on implementing a Let's Encrypt cert down the road.)

The bug from the Debian bug tracker:
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=773332

I'm using Ubuntu 16.04.2 LTS and this nginx version:
nginx/xenial,xenial,now 1.12.0-1+xenial1 all [installed]

In my nginx.conf:
user www-data;
worker_processes auto;

pid /run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;

    #log_format main '$remote_addr - $remote_user [$time_local] "$request" '
    #                '$status $body_bytes_sent "$http_referer" '
    #                '"$http_user_agent" "$http_x_forwarded_for"';
    error_log /var/log/nginx_error.log error;
    #access_log logs/access.log main;

    sendfile on;
    #tcp_nopush on;

    keepalive_timeout 65;

    # SSL
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # no sslv3 (poodle etc.)
    ssl_prefer_server_ciphers on;

    # Gzip Settings
    gzip on;
    gzip_disable "msie6";
    gzip_vary on;
    gzip_min_length 512;
    gzip_types text/plain text/html application/json application/javascript application/xml application/xml+rss application/x-javascript text/javascript text/xml text/css application/font-sfnt;

    fastcgi_cache_path /usr/share/nginx/cache/fcgi levels=1:2 keys_zone=microcache:10m max_size=1024m inactive=1h;

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
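On the question itself: the referenced bug concerns compression side channels (CRIME/BREACH). TLS-level compression is disabled in modern OpenSSL builds, but HTTP-level gzip can still leak secrets from responses that mix attacker-influenced input with tokens. A common compromise, sketched here with the certificate directives omitted, is to disable gzip for dynamic HTTPS responses and keep it for public static assets:

```nginx
# Sketch (certificates omitted): gzip is safe for public static files;
# the BREACH concern is dynamic pages that echo user input next to
# secrets such as CSRF tokens.
server {
    listen 443 ssl;
    gzip off;                        # default for dynamic responses

    location ~* \.(css|js|svg|ico|txt)$ {
        gzip on;                     # public assets contain no secrets
    }
}
```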


Thanks!