Channel: Nginx Forum - How to...

Reverse proxy with URI rewrite (no replies)

Hi, sorry for posting this kind of question again, as it looks like there are already enough of them. :(
Well, I'm apparently not able to catch the point. I just want to make this work, but it doesn't anymore.

I want requests for
http://webmail.domain.com
to be fetched by the reverse proxy from this address:
http://webmail.domain.com/sub/folder

That's all!
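
A minimal sketch of what I am after, assuming the host that actually serves /sub/folder is reachable as backend.example.com (a placeholder name):

server {
    listen 80;
    server_name webmail.domain.com;

    location / {
        # a URI on proxy_pass replaces the matched location prefix, so a
        # request for /foo is fetched from /sub/folder/foo on the backend
        proxy_pass http://backend.example.com/sub/folder/;
        proxy_set_header Host webmail.domain.com;
    }
}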

client_body_temp_path umask (no replies)

location /upload {
client_body_temp_path /var/www/staging/;
#use clean in prod
client_body_in_file_only on;
client_body_buffer_size 128K;
client_max_body_size 100M;

proxy_pass_request_headers on;
proxy_set_header X-FILE $request_body_file;
proxy_set_body off;
proxy_redirect off;
proxy_pass http://localhost/process;
}

files created by nginx:
2014/02/26 21:47:23 [notice] 4533#0: *1 a client request body is buffered to a temporary file /var/www/staging/0000000001, client: 127.0.0.1, server:.com, request: "POST /upload HTTP/1.1", host: "localhost"

The created file is owned by nobody and has very restrictive permissions:
-rw------- 1 nobody admin 140257 26 Feb 21:47 0000000001

I'd like to read the file and process its contents in the backend, but I can't figure out how to tell nginx to use a different umask (022) for the files it creates. Can anybody help me, please?

cheers
Ronny

How to limit stream speed (no replies)

I have implemented a limit_rate speed limit for my MP4 stream, and it is working fine.

(my mp4 config)

location ~ \.mp4$ {
secure_link $arg_s,$arg_e;
secure_link_md5 sifra$uri$arg_e;
if ($secure_link = "") {
return 403;
}
if ($secure_link = "0") {
return 403;
}

limit_rate_after 5m;
limit_rate 400k;
#gzip off;
mp4;
}

But on many sites I have found something like this:
http://site.com/high.flv?s=1393534948&e=1393549348&ri=5000&rs=70&r=site.com&h=d39bc2dd269583dc1f828111e5ed4786&ev=1

The rs argument appears to set the limit_rate: when I try to download, my download manager shows a maximum speed of 70 KB/s, but when I change it to, for example, rs=75 or any other value, I get a 403 error.

Different videos also have different rs values, as far as I can tell.

I want to do something like this on my site, but I need more information about the example above. How can I control the limit rate via the URL?
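
A minimal sketch of how I imagine it could work on top of my config above, assuming rs is a KB/s value and is included in the hashed string (with my existing secret) so clients cannot change it:

location ~ \.mp4$ {
    # rs is part of the hashed string, so changing it in the URL breaks the
    # signature and the request gets a 403
    secure_link $arg_s,$arg_e;
    secure_link_md5 sifra$uri$arg_e$arg_rs;
    if ($secure_link = "") {
        return 403;
    }
    if ($secure_link = "0") {
        return 403;
    }

    # $limit_rate takes a size per second; rs is assumed to be in KB/s
    set $limit_rate "${arg_rs}k";
    limit_rate_after 5m;
    mp4;
}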

Help with Nginx GeoIP load balancing with failover (no replies)

I am attempting to set up a test nginx load-balanced environment. So far I have successfully configured a load balancer, nginx-balancer1, and 3 servers to serve web pages: nginx1, nginx2 & nginx3.

I want to balance the load by region depending on the visitor's IP. I have configured nginx-balancer1 to use the MaxMind GeoIP Country data.

So here is my configuration to the upstream servers:

### START

# Check where the user is coming from
server {
location / {
proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504 http_404;
proxy_connect_timeout 2;

if ($geoip_city_continent_code = "EU") {
proxy_pass http://ams1;
}
if ($geoip_city_continent_code = "NA") {
proxy_pass http://sfo1;
}
if ($geoip_city_continent_code = "AS") {
proxy_pass http://sgp1;
}

}
}

# Define upstream servers
upstream ams1 { server server1.example.com max_fails=3 fail_timeout=10s; }
upstream sfo1 { server server1.example.com max_fails=3 fail_timeout=10s; }
upstream sgp1 { server server1.example.com max_fails=3 fail_timeout=10s; }

### END

This seems to work well; however, if I shut down nginx on, say, ams1 (server1.example.com) and try to go to the main page, I receive a 502 Bad Gateway error.

What I want to figure out is: if a server is down, how can I get nginx-balancer1 to redirect to another server, either the next closest or the next functioning one?

Generic log error for this is: connect() failed (111: Connection refused) while connecting to upstream.
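
Is something like a backup server per upstream the right approach? A minimal sketch (hostnames are placeholders):

upstream ams1 {
    server server1.example.com max_fails=3 fail_timeout=10s;
    # used only when all non-backup servers in this upstream are unavailable
    server server2.example.com backup;
}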

Can anybody help?

Thanks

How to get Nginx virtual host to work? (no replies)

I've got my question up here; I can't access my website via my public IP:

http://serverfault.com/questions/578762/how-to-make-nginx-virtual-host-work-page-not-found

alias and nested location (no replies)

I have a location to match php files:

# PHP handler
location ~ \.php {
## Catch 404s that try_files miss
if (!-e $request_filename) { rewrite / /index.php last; }

## Store code is defined in administration > Configuration > Manage Stores
fastcgi_param MAGE_RUN_CODE default;
fastcgi_param MAGE_RUN_TYPE store;
fastcgi_param HTTPS $fastcgi_https;
rewrite_log on;

# By default, only handle fcgi without caching
include conf/magento_fcgi.conf;
}

I need to set up an alias so that requests like /coolapp/ go to the coolapp located outside the document root. The only problem is that when these requests are for .php files, the location above is matched instead. I was instructed to take the .php location matching above and nest it inside my location /coolapp/, but nginx is now giving me "404 File not found" for PHP files. My location for /coolapp/ looks like this:

location /coolapp/ {
alias /var/www/apps/coolapp/;
location ~ \.php {
# Copied from "# PHP Handler" below
fastcgi_param MAGE_RUN_CODE default;
fastcgi_param MAGE_RUN_TYPE store;
fastcgi_param HTTPS $fastcgi_https;
rewrite_log on;

# By default, only handle fcgi without caching
include conf/magento_fcgi.conf;
}
}

conf/magento_fcgi.conf looks like this:

fastcgi_pass phpfpm;

## Tell the upstream who is making the request
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_redirect off;

# Ensure the admin panels have enough time to complete large requests ie: report generation, product import/export
proxy_read_timeout 1600s;

# Ensure PHP knows when we use HTTPS
fastcgi_param HTTPS $fastcgi_https;

## Fcgi Settings
include fastcgi_params;
fastcgi_connect_timeout 120;
fastcgi_send_timeout 320s;
fastcgi_read_timeout 1600s;
fastcgi_buffer_size 128k;
fastcgi_buffers 512 64k;
fastcgi_busy_buffers_size 128k;
fastcgi_temp_file_write_size 256k;
fastcgi_intercept_errors off;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME /var/www/apps/coolapp$fastcgi_script_name;
fastcgi_param SCRIPT_NAME $fastcgi_script_name;
# nginx will buffer objects to disk that are too large for the buffers above
fastcgi_temp_path /tmpfs/nginx/tmp 1 2;
#fastcgi_keep_conn on; # NGINX 1.1.14
expires off; ## Do not cache dynamic content

Here are the error messages I'm seeing in nginx error log:

2014/02/28 11:10:17 [error] 9215#0: *933 connect() failed (111: Connection refused) while connecting to upstream, client: x.x.x.x, server: mysite.com, request: "GET /coolapp/test.php HTTP/1.1", upstream: "fastcgi://[::1]:9000", host: "www.mysite.com"
2014/02/28 11:10:17 [error] 9215#0: *933 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: x.x.x.x, server: mysite.com, request: "GET /coolapp/test.php HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "www.mysite.com"
2014/02/28 11:11:59 [error] 9220#0: *1193 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: x.x.x.x, server: mysite.com, request: "GET /coolapp/test.php HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "www.mysite.com"
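
A minimal sketch of the nested block using $request_filename for SCRIPT_FILENAME instead of the hard-coded path (the fastcgi_pass address is a placeholder; if conf/magento_fcgi.conf also sets SCRIPT_FILENAME, that duplicate line would need to go):

location /coolapp/ {
    alias /var/www/apps/coolapp/;

    location ~ \.php {
        # with alias, $request_filename already resolves to
        # /var/www/apps/coolapp/<script>, so no hard-coded prefix is needed
        fastcgi_param SCRIPT_FILENAME $request_filename;
        include fastcgi_params;
        fastcgi_pass 127.0.0.1:9000;   # placeholder; match php-fpm's listen address
    }
}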

one server with ssl, and one without (no replies)

I am having some issues setting up two servers/virtual hosts on the same machine. I want ex1.example.com to direct to the first and ex2.example.com to the second (this is not the issue).

The first server is a bit complicated: it proxies a Waitress web server, serves some static files, and uses basic auth for some URLs - most importantly, it requires SSL:

server {
listen 80;
listen [::]:80;

listen 443 ssl;
ssl on;
ssl_certificate /path/to/server.crt;
ssl_certificate_key /path/to/server.key;

# Make site accessible from http://localhost/
server_name ex1.example.com;
...
}

The second, ex2, is a simple setup, proxying an apache server and doesn't need ssl:

server {
listen 80 default_server;
listen [::]:80 default_server ipv6only=on;

# Make site accessible from http://ex2.example.com/
server_name ex2.example.com;
...
}

I have a similar setup running on an older Ubuntu (precise) server (nginx version 1.1.19), where the above works fine.
On my Ubuntu saucy server (nginx version 1.4.1) I have some serious issues; I hope someone can help.

Directing my browser to both http://ex1.example.com and http://ex2.example.com works fine - confirming that DNS is set up correctly for both.
Directing my browser to https://ex1.example.com gives me a "this website is not available" message in the browser, and no entries in the nginx error or access logs - this could be many things, but:
Directing a browser on the server to https://localhost gives the correct page! (I see what I expected to see when going to https://ex1.example.com except for an error with my ssl certificate not covering localhost, of course).
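
For reference, my understanding of the listen/SSL layout recommended by the nginx docs for a server answering both HTTP and HTTPS - the ssl parameter on the listen line is preferred over "ssl on" (paths are placeholders):

server {
    listen 80;
    listen [::]:80;
    listen 443 ssl;
    listen [::]:443 ssl;   # without this, HTTPS is reachable over IPv4 only
    # "ssl on" is omitted: the ssl parameter on the listen lines enables TLS
    # only on port 443, instead of for every listener in this server block
    ssl_certificate     /path/to/server.crt;
    ssl_certificate_key /path/to/server.key;
    server_name ex1.example.com;
    ...
}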

I am going crazy trying to debug this - has anyone got any suggestions?
Thank you in advance,
Mads

Nginx - Windows 2008 (1 reply)

I need to know whether nginx is supported on Windows 2008, 32-bit and 64-bit. Sorry, I am new to nginx. Can you please point me to the related documentation?

Thanks in advance.

nginx not caching request from IIS 6 (no replies)

My nginx reverse proxy in front of IIS 6 (running ASP and SQL Server) is not caching requests.

nginx.conf is as follows:

--------------------------------------------------------------------------------------------------------------------------------------------
# For more information on configuration, see:
# * Official English Documentation: http://nginx.org/en/docs/
# * Official Russian Documentation: http://nginx.org/ru/docs/

#user nginx;
worker_processes 1;

error_log /var/log/nginx/error.log;
#error_log /var/log/nginx/error.log notice;
#error_log /var/log/nginx/error.log info;

pid /run/nginx.pid;


events {
worker_connections 1024;
}


http {
include /etc/nginx/mime.types;
default_type application/octet-stream;

proxy_cache_path /nginx/cache levels=1:2 keys_zone=mycache:1024m
max_size=1500m inactive=600m;
proxy_cache_key "$scheme://$host$request_uri";
proxy_ignore_headers "Set-Cookie";



log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';

access_log /var/log/nginx/access.log main;

--------------------------------------------------------------------------------------------------------------------------------------------

def.conf under conf.d is as follows

server {
listen 80;
server_name localhost.localdomain;

location / {
proxy_pass http://192.168.11.150:80;
proxy_cache_valid 200 302 10m;
proxy_cache_valid 404 1m;
proxy_cache_bypass $arg_nocache;
proxy_cache_bypass $http_pragma $http_authorization;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_redirect off;
client_max_body_size 10m;
client_body_buffer_size 128k;
proxy_connect_timeout 90;
proxy_send_timeout 90;
proxy_read_timeout 90;
proxy_buffers 32 4k;
proxy_busy_buffers_size 64k;
proxy_temp_file_write_size 512m;
}
}

--------------------------------------------------------------------------------------------------------------------------------------------

Any clues? I'm new to nginx and just playing around with it to become familiar with it.

Running on Fedora 20.
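
One thing that might matter: the proxy_cache_path above declares a mycache zone, but the location block never enables it. A minimal sketch of that extra line (everything else as in the config above):

location / {
    proxy_pass http://192.168.11.150:80;
    # the zone declared with proxy_cache_path must be switched on per location
    proxy_cache mycache;
    proxy_cache_valid 200 302 10m;
    proxy_cache_valid 404 1m;
}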

how to serve different content from two nics on the same server (no replies)

I have an appliance with two NICs and want to serve different content through each NIC. One NIC is private and the other public, and each exposes very different functionality. There are other ways to reach the eventual goal, including adding authentication for the "private" functionality, but for various reasons this is the current approach.

The best way I have found to accomplish this is to listen on the two IP addresses via "listen" directives in two different server blocks.
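
Roughly like this, with the addresses and document roots as placeholders:

# content exposed only on the private interface
server {
    listen 10.0.0.5:80;        # private NIC address (placeholder)
    root /srv/private;         # hypothetical private content
}

# content exposed only on the public interface
server {
    listen 203.0.113.10:80;    # public NIC address (placeholder)
    root /srv/public;          # hypothetical public content
}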

Is there a better way to serve different dynamic content on two NICs in the same server?

The reason I ask: listing the IP addresses in the configuration file like this necessitates an external script that automatically edits the file whenever an IP changes. There isn't a universally reliable way to do this in Linux (I'm using if-up.d/), or is there?

Thank you for any pointers.

How to setup nginx to rate limit behind an AWS elb? (no replies)

How can I set up nginx to use the rate-limiting module behind an AWS ELB?

Which variable should I use for the key so that it captures the forwarded client IP (from X-Forwarded-For)?
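
A minimal sketch of what I have in mind, assuming the realip module is available and the ELB's address range is known (the CIDR, zone name, and rates are placeholders):

# in the http{} context: trust the ELB and recover the client address
set_real_ip_from 10.0.0.0/8;          # placeholder ELB/VPC range
real_ip_header X-Forwarded-For;

# key the limit on the recovered client address
limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

server {
    location / {
        limit_req zone=perip burst=20 nodelay;
    }
}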

Set nginx load balancing in two servers with ldap server together fail (no replies)

Hi :

This cluster consists of only two servers; the load balancer and the applications are all installed on them.

I set up two nginx load balancers in front of the LDAP servers on port 389, but nginx binds my LDAP port 389 and causes a failure when bringing up LDAP (I set a loopback interface alias for the virtual IP). If I use IPVS/UltraMonkey, this model can be achieved.

Does nginx have to be dedicated to a separate server, outside my applications? That would mean at least 3 servers: 1 balancer + 2 LDAP servers.
It seems nginx must occupy the port itself in order to forward; it's not like other layer 7 load balancers.

Server 1: nginx + LDAP
Server 2: nginx + LDAP

Installed using puPHPet -- 403 and 500 error (2 replies)

I've run through the puPHPet creation of a manifest (using Vagrant and Puppet) - anyway, my question is sort of basic, I hope:

I'm running nginx version: nginx/1.4.5 and I can't seem to locate where my sites-available directory is. I checked /etc/nginx and all that is there is (ls -a /etc/nginx):
./ ../ conf.d/ conf.mail.d/ fastcgi_params koi-utf koi-win mime.types nginx.conf scgi_params uwsgi_params win-utf

My other concern is that my root directory is /var/www/project-name, not /var/www/html -- but I'm assuming that editing my sites-available/project-name (when I find it) will take care of that for me?

I'm new to nginx and I build WordPress sites -- so I'm moving from Apache to nginx. Any help is greatly appreciated!

3 quick questions (no replies)

Hi there-

I had 3 questions about nginx functionality:

a) suppose I have a location block in which I'd like to trigger some sort of script or 'notify' another location...would the best way be to use the HttpLuaModule, and execute a script within the location block? Or is there a simpler way?

b) I'd like to set up a single nginx instance handling 4 other nginx nodes in round-robin fashion. This is to load-balance SSL encryption, where the first hop (I assume proxy_pass is the best way) is not secure, but each individual node handles its own separate SSL encryption.
See the crude image attached... basically A/B/C/D are duplicate, generally identical locations; each handles the request made by the initial node separately and then fans out as appropriate.

c) On the topic of b), is there a way for a location to 'know' where it was referenced from? For example, if I got to this location from location X, I'd like to enforce SSL encryption, but from location Y, encryption is not necessary etc.

Thanks, and apologies if these questions are vague...still super new.

Is it possible to get nginx to skip check of virtual ip's? (no replies)

We've got two nginx servers running as load-balancing proxies.
Because some of our customers lack SNI support, we have had to set up a few different virtual IPs and specify them in each site's config.

This works well, but the problem is that keepalived only brings up the IPs when it is active,
and nginx tries to check all the IPs when it starts, so the backup proxy cannot start because the IPs are on the master proxy.

Is it possible to force a start anyway, or maybe disable the IP check during startup?

Thanks in advance

Rewrite Help (no replies)

Hi there,

I'm a new nginx user and love it (mostly) so far. The problem is converting my old Apache rewrites to nginx. I'm unable to make the following things work and need a hand (and am willing to pay for help if it's a complex thing).

I am using the YII framework with NGINX, and I have the YII bootstrap redirects in place and working.

Issue 1
We have a dev site and a live site. I only want our IPs to access the dev site, so any unauthorized IP should be redirected to the live site, preferably with a 301 redirect so the search engines no longer index the dev site.

Issue 2
If I have a path like site/folder or site/folder/, I get a 404 error; for some reason the index.php or index.html isn't being read at all, even though I have a location directive:

location /folder/ {
if (!-e $request_filename) {
rewrite ^(.*)$ /folder/index.php;
}
}
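
For Issue 2, a try_files-based sketch of what I think I need, assuming /folder/index.php is the front controller for that folder:

location /folder/ {
    # fall back to the folder's front controller instead of the if(!-e) test
    try_files $uri $uri/ /folder/index.php;
}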

Issue 3
I'd like to redirect /account and /signup pages to https.

So if anyone can help answer this I would be very happy.

Thank you very much, and I hope I can be contributing soon and not just asking questions.

Help Configuring NGINX with PHP-FPM (no replies)

Hello, I have a few questions about getting nginx and PHP-FPM working properly together. I will post my config files for you all to take a look.

Essentially, the root directory and anything underneath it seems to be processing PHP files fine. That root directory is /usr/share/nginx/html. Now I would like to add phpMyAdmin a level up from the root directory, at /usr/share/nginx/phpmyadmin. When I create the alias for phpmyadmin, or any other alias I want to make, PHP processing fails and it states it cannot find the file.

Is there a way to get PHP to process properly no matter where you are? I got it working with the root directory by adding fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; and I think that is also why the aliases do not work, because $document_root points to /usr/share/nginx/html and not /usr/share/nginx/phpmyadmin. If I get rid of the fastcgi_param I do not get any errors, but I do get blank pages with no processed PHP. Thanks in advance.
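
For reference, a minimal sketch of the kind of alias block I have been attempting, assuming $request_filename can be used to resolve the aliased path (the fastcgi_pass socket is a placeholder):

location /phpmyadmin/ {
    alias /usr/share/nginx/phpmyadmin/;

    location ~ \.php$ {
        include fastcgi_params;
        # with alias, $request_filename maps to the file under the aliased
        # directory, unlike $document_root$fastcgi_script_name
        fastcgi_param SCRIPT_FILENAME $request_filename;
        fastcgi_pass unix:/var/run/php-fpm.sock;   # placeholder socket path
    }
}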

dav delete gets 401 (1 reply)

Hi,

I've a very simple nginx setup with the standard dav module:

client_body_temp_path /tmp/nginx-client-tmp 1 2;
create_full_put_path on;
client_max_body_size 50m;
dav_access user:rw group:rw all:r;
dav_methods PUT DELETE;

PUT and GET work as expected but DELETE returns 401:

"DELETE /file.name HTTP/1.1" 401

How to fix this?

TIA

Multiple nginx.conf files - how do choose? (no replies)

I compiled nginx from source on Ubuntu 13.* as this is my first foray into nginx. Once I got the server successfully up and running, I had trouble getting it to obey server contexts for a site I already had set up on the server (which was previously running under Apache).

I originally modified the file /etc/nginx/nginx.conf, which DID NOT affect server behavior. After reading some additional forums, I located an additional nginx.conf file at:
/usr/local/nginx/conf/nginx.conf. Adding the server contexts to this helped. However, I'm wondering what the one under /etc/nginx is, and I noticed that there are also standalone server-context-type files located in /etc/nginx/sites-available and sites-default (much like Apache), but I edited these before the one at /usr/local and they didn't work either.

Can someone tell me what the best practice is for defining configuration, specifically server contexts, and why there are multiple locations/config files where this seems applicable? Again, the only one that affected server behavior seems to be the default one at /usr/local/nginx/conf/.

Any insight from those more experienced would be very appreciated...

Check if NFS mount is active before serving (no replies)

Hello.

Does anyone have an idea how to check whether an NFS drive is mounted and alive before serving files from it?

If the drive is not alive or not mounted, I want to use a copy of the files from the NFS mount that is cached with cachefs, but for that I need to change my root directory dynamically based on the NFS status.

Edit: Maybe there is some analogue in nginx of the PHP function is_writable()?

Thanks in advance

