Channel: Nginx Forum - How to...

Multi wildcard certificates for multi wildcard domains (no replies)

Hi all,
This is my environment:
CentOS release 6.4 (Final) , nginx-1.8.1-1.el6.ngx.x86_64
[quote]
nginx -V
nginx version: nginx/1.8.1
built by gcc 4.4.7 20120313 (Red Hat 4.4.7-16) (GCC)
built with OpenSSL 1.0.1e-fips 11 Feb 2013
TLS SNI support enabled
[/quote]
I have two web sites: website1 (multiple subdomains: abc.website1.com, xyz.website1.com) and website2 (single domain website2.com). This is the nginx configuration:
[quote]
server {
# website1 redirect http to https
listen ip:80;
server_name *.website1.com;
return 301 https://$host$request_uri;
}

server {
# website2 redirect http to https
listen ip:80;
server_name website2.com;
return 301 https://$host$request_uri;
}

server {
listen ip:443 ssl;
ssl_certificate path-to-website1-wildcard-certificate-file;
ssl_certificate_key path-to-website1-privatekey-file;
ssl_session_cache shared:SSL:10m;
server_name *.website1.com;
...
}

server {
listen ip:443 ssl;
ssl_certificate path-to-website2-single-domain-certificate-file;
ssl_certificate_key path-to-website2-privatekey-file;
ssl_session_cache shared:SSL:10m;
server_name website2.com;
...
}
[/quote]
Everything works fine. Now I have purchased a wildcard certificate for website2, so I changed the configuration:
[quote]
server {
# website1 redirect http to https
listen ip:80;
server_name *.website1.com;
return 301 https://$host$request_uri;
}

server {
# website2 redirect http to https
listen ip:80;
server_name *.website2.com;
return 301 https://$host$request_uri;
}

server {
listen ip:443 ssl;
ssl_certificate path-to-website1-wildcard-certificate-file;
ssl_certificate_key path-to-website1-privatekey-file;
ssl_session_cache shared:SSL:10m;
server_name *.website1.com;
...
}

server {
listen ip:443 ssl;
ssl_certificate path-to-website2-wildcard-certificate-file;
ssl_certificate_key path-to-website2-privatekey-file;
ssl_session_cache shared:SSL:10m;
server_name *.website2.com;
...
}
[/quote]
After reloading, I can access https://website1.com successfully, but when I access https://website2.com I get an error that the certificate is for the wrong domain. I added an exception and found that nginx uses the website1 wildcard certificate for website2 requests/responses.
I don't understand why nginx doesn't serve two different wildcard certificates for two different wildcard domains. Is this normal, or did I do something wrong?
For now I have to change the website2 configuration to:
[quote]
server {
# website2 redirect http to https
listen ip:80;
server_name website2.com abc.website2.com xyz.website2.com;
return 301 https://$host$request_uri;
}

server {
listen ip:443 ssl;
ssl_certificate path-to-website2-wildcard-certificate-file;
ssl_certificate_key path-to-website2-privatekey-file;
ssl_session_cache shared:SSL:10m;
server_name website2.com abc.website2.com xyz.website2.com;
...
}
[/quote]
as a temporary workaround.
Can anyone give me some advice? Thank you very much.
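[Editor's note: a hedged sketch, not a verified fix. A wildcard server_name such as *.website2.com does not match the bare domain website2.com, so a request for https://website2.com falls through to the first (default) server block on ip:443, which holds the website1 certificate. Listing the bare domain explicitly avoids this:]

```nginx
server {
    listen ip:443 ssl;
    ssl_certificate path-to-website2-wildcard-certificate-file;
    ssl_certificate_key path-to-website2-privatekey-file;
    ssl_session_cache shared:SSL:10m;
    # the wildcard matches subdomains only; name the bare domain as well
    server_name website2.com *.website2.com;
    ...
}
```

(Most wildcard certificates cover the bare domain only if it is listed as a subject alternative name, so the certificate itself is worth checking too.)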

Rewrite (no replies)

I have a WordPress site where my client wants a segment to always appear in the URL.

For example

http://rethink.test

will always contain:

http://rethink.test/community

or

http://rethink.test/about

http://rethink.test/community/about

etc.etc.

Could this be achieved in a server block?
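[Editor's note: one possible reading is that every public URL should live under a fixed prefix; a sketch under that assumption, with /community as a purely hypothetical choice of prefix:]

```nginx
server {
    server_name rethink.test;

    # (?! ... ) is a PCRE negative lookahead: rewrite only URIs that do NOT
    # already start with /community, so the redirect cannot loop
    rewrite ^/(?!community(/|$))(.*)$ /community/$2 permanent;

    location /community/ {
        # hand the prefixed URI to WordPress as usual
        try_files $uri $uri/ /index.php?$args;
    }
}
```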

how to redirect to Apache2 properly (1 reply)

I'm running Nginx, Tomcat, Apache2, Kibana, Grok and Graphite on a single server.

Tomcat serves Grok, Apache2 serves Graphite, and nginx listens on port 80 and redirects.

My configuration is:

server {

listen 443 ssl default_server;
listen 80;

if ($scheme = http) {
return 301 https://$server_name$request_uri;
}

server_name myserver.org;
if ($http_x_forwarded_proto = '') {
set $http_x_forwarded_proto $scheme;
}

root /var/www/html/;

location / {
alias /var/lib/tomcat8/webapps/;
proxy_pass http://127.0.0.1:4180;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Scheme $scheme;
proxy_connect_timeout 5;
proxy_send_timeout 30;
proxy_read_timeout 30;
}

location /kibana/ {
proxy_ignore_client_abort on;
proxy_pass http://localhost:5601/;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
}


location /graphite/ {
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_pass http://localhost:81/;
}
}

Requests to localhost/kibana/ work fine.
Requests to localhost/graphite/ are served by Tomcat instead of Apache2.
If I go to localhost:81, my Graphite loads.
What is wrong here?
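[Editor's note: a speculative observation. nginx picks the longest matching prefix, so /graphite/ requests do reach the graphite block; a common culprit is the backend answering with a redirect or asset links pointing at "/", whose follow-up requests then match "location /" and land on Tomcat. Rewriting backend redirects back under the prefix can help:]

```nginx
location /graphite/ {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_pass http://localhost:81/;
    # map "Location: /..." redirects from the backend onto /graphite/...
    proxy_redirect / /graphite/;
}
```

(Graphite-web also has a URL_PREFIX setting that may need to match the /graphite/ prefix.)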

Added Nginx to a Ubuntu 16.04 with Virtualmin now I'm fucked-up... (no replies)

I apologize, but the Dunning-Kruger Effect (https://en.wikipedia.org/wiki/Dunning%E2%80%93Kruger_effect) says I'm too dumb to know how to ask for, _or get help,_ correctly.

So here's where I'm at. I'm doing a thing on my own hardware at home behind DHCP. I use the Ubuntu 16.04 LTS Server operating system and manage it with the Virtualmin interface. I've only known Apache for web serving, so it pains me to learn something new. I have a few existing virtual servers running on my DMZ built with Virtualmin and a bunch of subdomains also built with Virtualmin. I decided to make my own subdomain with Webmin from the servers FQDN to run the BigBlueButton software, but they *ONLY* let you do it via Nginx.

After some false starts installing that program, I gave ports 80 and 443 to Nginx's control, and my BigBlueButton works pretty well so far. Red herring: OAuth trouble with Google, and HTML5 is fucked up on Ubuntu. So now I'm using port 591 for Apache's HTTP traffic and port 4433 to serve Apache's HTTPS sites. I've read a lot of blogs and posts about how to do this division of traffic, and I almost had it working, but the SSL sites known to Apache wouldn't serve correctly. The solution I read about, which was supposed to fix that SSL problem, made nothing on Apache work. So here's my hope...

I can undo the fiddling I did to my Nginx files. Is there an elegant way to have Nginx push all traffic it doesn't have an /etc/nginx/sites-enabled/* file for over to Apache? I just don't speak Nginx worth a shit, and it really looks like anti-structured nonsense to me most of the time. It seems like I should put some blocks in the default /etc/nginx/nginx.conf file that redirect anything caught by the default server to Apache.

Attached is my last rendition of something in my /etc/nginx/sites-available/ folder with the symbolic link to it in /etc/nginx/sites-enabled/ that didn't work for an existing Apache Virtual Host bearing that file's name.

Oh geez, and I almost forgot that the Webmin/Virtualmin management system uses port 10000, but Virtualmin adds apache records for virtual servers to redirect something like admin.wnymathguy.com to https://wnymathguy.com:10000, so that might be different than whatever solves my problem stated above.
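[Editor's note: a sketch only, assuming Apache now listens on 127.0.0.1:591 (HTTP) and 4433 (HTTPS) as described. A default_server block catches every host nginx has no sites-enabled file for and proxies it to Apache:]

```nginx
server {
    listen 80 default_server;
    server_name _;

    location / {
        proxy_pass http://127.0.0.1:591;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

(A default_server on 443 would also need a certificate nginx can present, since nginx terminates the TLS itself before proxying; for Apache to keep serving its own HTTPS certificates untouched, TCP passthrough via the stream module is the usual route instead.)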

Multiple server blocks using same port (no replies)

Hello. I have a problem with nginx. I have three websites running, all using the same port 443 for SSL. When I try to restart nginx, I get an error like the following:

root@vps:~# nginx
nginx: [emerg] bind() to 0.0.0.0:443 failed (98: Address already in use)
nginx: [emerg] bind() to [::]:443 failed (98: Address already in use)
nginx: [emerg] bind() to 0.0.0.0:443 failed (98: Address already in use)
nginx: [emerg] bind() to [::]:443 failed (98: Address already in use)
nginx: [emerg] bind() to 0.0.0.0:443 failed (98: Address already in use)
nginx: [emerg] bind() to [::]:443 failed (98: Address already in use)
nginx: [emerg] bind() to 0.0.0.0:443 failed (98: Address already in use)
nginx: [emerg] bind() to [::]:443 failed (98: Address already in use)
nginx: [emerg] bind() to 0.0.0.0:443 failed (98: Address already in use)
nginx: [emerg] bind() to [::]:443 failed (98: Address already in use)
nginx: [emerg] still could not bind()

Here is my nginx "default" file: https://hastebin.com/ejuwapafup.nginx

I had gotten it to work on Ubuntu 16.04.3, but when I switched to Debian 9, this issue appeared.

If you can help, please do!
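[Editor's note: not a diagnosis of this exact setup, but "Address already in use" on every bind attempt usually means the port is already held, most often by an nginx master that is still running (the bare `nginx` command starts a second master rather than restarting the first), or by another web server. A quick check, assuming iproute2/systemd are available:]

```shell
# See which process currently owns port 443:
ss -tlnp | grep ':443 ' || echo "nothing listening on 443"

# If an nginx master is already running, signal it rather than starting a
# second copy with the bare "nginx" command:
#   nginx -s reload          # re-read configuration in place
#   systemctl restart nginx  # or restart the systemd unit
```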

file upload (no replies)

I have installed nginx version: nginx/1.10.3 (Ubuntu)

A website requires file uploads. I use PHP 7 to handle the upload.

The form:
========

<div class="collapse" id="upload_avatar">
<div class="card card-body">
<form enctype="multipart/form-data" action="" method="post">
<p class="text-left">Upload Avatar:</p>
<input type="hidden" name="MAX_FILE_SIZE" value="300000" />
<input name="image" type="file" /><br>
<button class="form-control mr-sm-2 btn btn-outline-success my-2 my-sm-0" type="submit" name="avatar_upload" aria-controls="collapse_upload_avatar">
Upload
</button>
</form>
</div>
</div>

The php part:
===========

if(isset($_POST["avatar_upload"])){
$verifyimg = getimagesize($_FILES['image']['tmp_name']);

if($verifyimg['mime'] != 'image/png') {
echo "Only PNG images are allowed!";
exit;
}

$uploaddir = '/members/3/';
$uploadfile = $uploaddir . basename($_FILES['image']['name']);

if (move_uploaded_file($_FILES['image']['tmp_name'], $uploadfile)) {
echo "File is valid, and was successfully uploaded.<br>";
} else {
echo "Possible file upload attack!<br>";
}

echo '<pre>';
echo 'info:';
print_r($_FILES);
print "</pre>";
}

It prints out:
=========
Possible file upload attack!
info:Array
(
[image] => Array
(
[name] => Selection_001.png
[type] => image/png
[tmp_name] => /tmp/phpGpp3rB
[error] => 0
[size] => 299338
)
)
There is no /tmp/php* file

There is no file in the /members/3/ directory

The permission is 777 for /members and /members/3


nginx/error.log shows:
=================
PHP message: PHP Warning: move_uploaded_file(/members/3/Selection_001.png): failed to open stream: No such file or directory ... on line 197 PHP message:
PHP Warning: move_uploaded_file(): Unable to move '/tmp/phpGpp3rB' to '/members/3/Selection_001.png'


/etc/nginx/nginx.conf:
=================

user www-data;
worker_processes auto;
pid /run/nginx.pid;

events {
worker_connections 768;
# multi_accept on;
}

http {

##
# Basic Settings
##

sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
# server_tokens off;

# server_names_hash_bucket_size 64;
# server_name_in_redirect off;

include /etc/nginx/mime.types;
default_type application/octet-stream;

##
# SSL Settings
##

ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
ssl_prefer_server_ciphers on;

##
# Logging Settings
##

access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;

##
# Gzip Settings
##

gzip on;
gzip_disable "msie6";

# gzip_vary on;
# gzip_proxied any;
# gzip_comp_level 6;
# gzip_buffers 16 8k;
# gzip_http_version 1.1;
# gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

##
# Virtual Host Configs
##

include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}


The sites-available/example.com
==========================

server {
listen 80;
listen [::]:80;

root /home/ronald/docker-websites/example.com;

# Add index.php to the list if you are using PHP
index index.php index.html index.htm index.nginx-debian.html;

server_name example.com;

location / {
# First attempt to serve request as file, then
# as directory, then fall back to displaying a 404.
try_files $uri $uri/ =404;
}

location ~ \.php$ {
include snippets/fastcgi-php.conf;
fastcgi_pass unix:/run/php/php7.0-fpm.sock;
}

location ~ /\.ht {
deny all;
}


listen 443 ssl; # managed by Certbot
ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem; # managed by Certbot
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem; # managed by Certbot
include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot


if ($scheme != "https") {
return 301 https://$host$request_uri;
} # managed by Certbot


}

What am I missing?
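[Editor's note: the error log already hints at the cause: '/members/3/' is an absolute filesystem path, so PHP tries to write to a /members directory at the root of the filesystem, not the one inside the site. A possible fix, assuming the upload directory lives under the document root:]

```php
// '/members/3/' as written points at the filesystem root; anchor it to the
// site's document root instead:
$uploaddir = $_SERVER['DOCUMENT_ROOT'] . '/members/3/';
$uploadfile = $uploaddir . basename($_FILES['image']['name']);
```

(With nginx + PHP-FPM, $_SERVER['DOCUMENT_ROOT'] is normally populated by the fastcgi_param DOCUMENT_ROOT set in snippets/fastcgi-php.conf; hardcoding the full docroot path would work as well.)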

Server_Name redirecting to IP (no replies)

Hello,

I'm trying to figure out why, even with a DNS name set as the "server_name" entry in the nginx/sites-available/ configuration files, the site is still reachable via its IP address in a web browser instead of serving up a 404?

This doesn't happen in any of my other environments, which are set up the same way. Security audits say our production webpage should never be reachable via an IP address, only via a DNS name.

Any thoughts of where I should start looking even, would be greatly appreciated.

~S
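[Editor's note: one hedged starting point. nginx answers every request on a listen socket with *some* server block; when no server_name matches the Host header (a bare-IP request has no matching name), the default (first) block responds, which is why IP requests still get content. An explicit catch-all default server can refuse them:]

```nginx
# 444 closes the connection without a response; "return 404;" would serve
# a 404 page instead. Certificate paths below are placeholders: any
# certificate works here, it only has to complete the TLS handshake.
server {
    listen 80 default_server;
    listen 443 ssl default_server;
    server_name _;

    ssl_certificate     /etc/nginx/snakeoil.crt;
    ssl_certificate_key /etc/nginx/snakeoil.key;

    return 444;
}
```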

How to have SSL passthrough with source ip preservation (no replies)

Hey,

I would like to know how to do SSL passthrough (using map $ssl_preread_server_name) where I have one main load balancer forwarding traffic to multiple Node.js servers. The Node.js servers don't have NGINX in front of them except the load balancer, so the SSL configuration lives in Node.js, not NGINX.

The load balancer (entry) NGINX config looks like this:

--
stream {
map $ssl_preread_server_name $name {
backend.example.com backend;
}

upstream backend {
server 192.168.0.1:443;
}

server {
listen 443;
proxy_pass $name;
ssl_preread on;
}
}
--

And I would like to pass the real client IP (not the load balancer IP), X-Forwarded-For-style, so that the Node.js servers can see it.

I saw there is proxy_protocol (within a stream block), but it looks like it doesn't work with the kind of setup I'm using (SSL credentials directly in Node.js, not NGINX).

Any idea how to accomplish this?

Thanks
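[Editor's note: for what it's worth, the PROXY protocol is normally the answer in exactly this setup. Since the TLS stream is passed through untouched, nginx cannot inject an X-Forwarded-For header, but it can prepend a PROXY protocol preamble that the backend parses before the TLS handshake. A sketch of the stream side, reusing the poster's names:]

```nginx
stream {
    map $ssl_preread_server_name $name {
        backend.example.com backend;
    }

    upstream backend {
        server 192.168.0.1:443;
    }

    server {
        listen 443;
        ssl_preread on;
        proxy_pass $name;
        proxy_protocol on;   # send "PROXY TCP4 <client-ip> ..." first
    }
}
```

The Node.js side then has to strip and parse that preamble before handing the socket to TLS, e.g. with a PROXY-protocol parsing module (the exact module is an implementation choice, not something nginx dictates).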

How to configure Nginx for Windows (no replies)

Hello, I am looking for a better alternative to Apache for my Windows web server. When the server is under high load at peak times (it's a streaming server, with thousands of users accessing 2-3 MB files all at once), it starts to serve files a lot slower and I get timeouts. System resource usage is not high at all, so it must be the web service (Apache). I was wondering if Nginx would handle this better, and if so, what settings should I use? I was reading the config file here: https://github.com/denji/nginx-tuning but I think it is only for Linux, because a lot of the settings are not in my conf file. Also, which line directs the server to a different folder on my C: drive, where I have my own index.html and other files?
Thanks for your time.
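[Editor's note: on the last question, the `root` directive is what points nginx at a folder. A minimal Windows sketch; the path is a made-up example:]

```nginx
server {
    listen 80;
    server_name localhost;

    # forward slashes are fine in paths on Windows builds of nginx
    root  C:/www/mysite;
    index index.html;
}
```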

Proper syntax in nginx.conf (2 replies)

Hello, I'm running Nginx for Windows, but when I run it and then try to ping the server I get ping replies yet 504 server errors, and I can't get the index page to load. Do I have something wrong in my config file below?


location / {
# root html;
root C:/www/htdocs;
index C:/www/htdocs/index.html;
}
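[Editor's note: one likely issue, guessing from the snippet alone: `index` takes file names resolved relative to `root`, not full paths. A corrected sketch:]

```nginx
location / {
    root  C:/www/htdocs;
    index index.html;   # resolved against root, i.e. C:/www/htdocs/index.html
}
```

(A 504 usually points at an upstream proxy or FastCGI backend timing out rather than static file serving, so it may also be worth checking whether another location block is proxying these requests.)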

solved :) thanks (1 reply)

Known issues (no replies)

I just want to confirm, based on the issues stated below, that there is no point in setting more than 1 worker process or more than 1024 worker connections, and that Windows/Nginx will ignore anything more?
If 2000 external connections try to start, they have to wait until one frees up, even if I set 2048 worker connections and auto for worker processes, right?

Known issues : http://nginx.org/en/docs/windows.html

Although several workers can be started, only one of them actually does any work.
A worker can handle no more than 1024 simultaneous connections.

What is meant by keepalive_timeout (no replies)

Hello, does the keepalive_timeout setting close only idle connections after the set time? If I set keepalive_timeout 5;, this won't close currently active connections that are still downloading/working, right? I am trying to tune the server for thousands of users, so I may have 600 active requests at the same time, and I want to free up idle connections as soon as possible so I don't hit the 1024-connection maximum. So am I right that this is the time in seconds after which stale/idle connections are closed, and that it won't affect active ones? Thanks
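[Editor's note: for reference, the timeout only starts ticking once a request has been fully answered and the connection sits idle waiting for the next one; an in-flight transfer is governed by the send/read timeouts instead. A small sketch:]

```nginx
http {
    keepalive_timeout  5s;    # close connections idle between requests
    keepalive_requests 100;   # recycle a connection after this many requests
    send_timeout       30s;   # applies to stalled *active* transfers instead
}
```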

how to change system date (no replies)

What permissions do I need to set so that nginx can update the server time? I have attempted to update sudoers with a www account but couldn't get it to work. I also tried chmod 777 /bin/date; that didn't work either.

Any help would be greatly appreciated. Thanks.
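[Editor's note: chmod 777 on /bin/date cannot work, because setting the clock is gated by the CAP_SYS_TIME capability, not file permissions. A sketch of the usual routes; it assumes the worker runs as www-data, and the helper path is hypothetical:]

```shell
# Option 1: a narrow sudoers entry, created with visudo -f /etc/sudoers.d/www-date:
#
#   www-data ALL=(root) NOPASSWD: /bin/date
#
# after which the application can run:
#   sudo /bin/date -s "2018-01-01 12:00:00"
#
# Option 2: grant the capability to a dedicated helper binary:
#   setcap cap_sys_time+ep /usr/local/bin/settime-helper

date -u   # reading the time, unlike setting it, needs no privileges
```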

Can't solve this redirect :( (no replies)

Hi, I can't work out how this .htaccess rule should be rewritten :( could someone please try to help me?

in .htaccess
RewriteRule ^filters$ index.php?controller=filters [L,QSA]

I have tried the following rewrites, but all of them give me "The page isn't redirecting properly":

rewrite ^filters$ /index.php?controller=filters last;
rewrite ^/filters/$ /index.php?controller=filters last;
rewrite ^/filters$ /index.php?controller=filters last;

None of the above works, but I checked it on Apache, and there the rule works perfectly.
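[Editor's note: a hedged observation. In nginx the URI always begins with a slash, so `^filters$` (the first attempt) can never match, while `^/filters$` is the faithful translation; a redirect loop usually means the rewritten request is being rewritten again or bounced by the PHP application. An exact-match location sidesteps re-matching:]

```nginx
# Only /filters itself is affected, and the internally rewritten
# /index.php request cannot loop back into this block.
location = /filters {
    rewrite ^ /index.php?controller=filters last;
}
```

(Whether the loop actually comes from the application redirecting back to /filters can't be told from the snippet; checking the Location headers with curl -i would confirm.)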

HELP: Active Directory Authentication via SSS/PAM Integration (no replies)

Hi.

How can I get successful auth_pam authentications against Active Directory with nginx serving as a reverse proxy? I have nginx-full (1.10.3) installed on an Ubuntu 16.04 LTS EC2 instance. I've successfully joined the VM to an Active Directory domain, and I'm able to log in to an SSH session using a domain user defined only in AD. If I use those same AD user credentials when navigating to a protected URL via web browser, I get a 401 error. However, using the credentials of a user local to the VM hosting nginx, I can authenticate and reach the protected URL. I've included the pertinent config files below. What am I missing?

Thanks.

# /etc/pam.d/nginx
#
@include common-auth
# ###END /etc/pam.d/nginx

# /etc/pam.d/common-auth
#
# here are the per-package modules (the "Primary" block)
auth [success=2 default=ignore] pam_unix.so nullok_secure
auth [success=1 default=ignore] pam_sss.so use_first_pass
# here's the fallback if no module succeeds
auth requisite pam_deny.so
# prime the stack with a positive return value if there isn't one already;
# this avoids us returning an error just because nothing sets a success code
# since the modules above will each just jump around
auth required pam_permit.so
# ###END /etc/pam.d/common-auth

# /etc/nsswitch.conf
#
# Example configuration of GNU Name Service Switch functionality.
# If you have the `glibc-doc-reference' and `info' packages installed, try:
# `info libc "Name Service Switch"' for information about this file.
passwd: compat sss
group: compat sss
shadow: compat sss
gshadow: files
hosts: files dns
networks: files
protocols: db files
services: db files sss
ethers: db files
rpc: db files
netgroup: nis sss
sudoers: files sss
# ###END /etc/nsswitch.conf

# /etc/sssd/sssd.conf
#
[sssd]
domains = SUBDOMAIN.TLD
config_file_version = 2
services = nss, pam

[domain/SUBDOMAIN.TLD]
ad_domain = SUBDOMAIN.TLD
krb5_realm = SUBDOMAIN.TLD
realmd_tags = manages-system joined-with-adcli
cache_credentials = True
id_provider = ad
krb5_store_password_if_offline = True
default_shell = /bin/bash
ldap_id_mapping = True
use_fully_qualified_names = True
fallback_homedir = /home/%u@%d
access_provider = ad
simple_allow_groups = Domain Admins
ad_hostname = hostname.subdomain.tld
dyndns_update = True

# ###END /etc/sssd/sssd.conf
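[Editor's note: two things in the posted files that might matter; this is speculation, not a confirmed diagnosis. With `use_fully_qualified_names = True`, PAM only recognizes `user@SUBDOMAIN.TLD`, which SSH users often type but browser auth dialogs often don't; and `simple_allow_groups` is read by the `simple` access provider, not by `access_provider = ad`. A sketch of the relevant sssd.conf lines:]

```ini
# /etc/sssd/sssd.conf -- hypothetical adjustments; restart sssd after editing
[domain/SUBDOMAIN.TLD]
# allow plain "user" logins instead of requiring user@SUBDOMAIN.TLD
use_fully_qualified_names = False

# simple_allow_groups only takes effect with the simple access provider
access_provider = simple
simple_allow_groups = Domain Admins
```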

Drupal installation issue: "sites/default/files is not writable"? (no replies)

Hi,

I am trying to install Drupal on Gentoo/Linux with nginx web server.

However, the installer reports a "Requirements problem": "The directory sites/default/files is not writable."

Please advise how to grant permissions to the web server? I am quite lost regarding web server permissions...

If any additional information is needed please let me know.
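[Editor's note: a typical fix, sketched under assumptions: it's PHP (PHP-FPM) rather than nginx itself that needs write access, since nginx only proxies the request, and the pool user/group varies ("nginx" or "www-data" are common, check the "user =" line in the pool config):]

```shell
# From the Drupal document root:
mkdir -p sites/default/files
# chgrp -R nginx sites/default/files   # group name is an assumption
chmod -R ug+rwX sites/default/files    # rwX: execute bit only on directories
```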

Same content, different headers (no replies)

I'm trying to configure nginx to achieve the following:

GET / -> returns index.html with an additional header (e.g. X-ROOT = true)
GET /foo -> returns index.html with an additional header (e.g. X-FOO = true)
GET /v1/api/whatever -> everything which starts with /v1/api is passed to an upstream
GET /everything-else -> all the rest is served from disk, handled via try_files $uri $uri/ /index.html;

Below is my nginx.conf. All my responses contain the X-INDEX-HTML header. I have a vague idea why this is the case,
but I don't know how to make /foo return the contents of index.html with the X-FOO header.
Any help is appreciated.

server {
listen 80 default_server;

location /index.html {
add_header "X-INDEX-HTML" "true";

root /var/www/test;
try_files $uri $uri/ /index.html;
}

location = /foo {
add_header "X-FOO" "true";

root /var/www/test;
index /index.html;
}

location = / {
add_header "X-ROOT" "true";

root /var/www/test;
index /index.html;
}

location / {
root /var/www/test;
try_files $uri $uri/ /index.html;
}

location /v1/api/ {
proxy_pass http://my.upstream.net;
}
}
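[Editor's note: one plausible explanation and fix, sketched but untested against this exact tree. Both `index` and the *final* element of `try_files` trigger an internal redirect to /index.html, which is re-matched against all locations and lands in `location /index.html`, so that block's header wins every time. Serving the file in place, without a redirect, keeps the per-location header:]

```nginx
location = /foo {
    add_header "X-FOO" "true";
    root /var/www/test;
    # non-final try_files entries are served in the current location,
    # so no internal redirect to "location /index.html" happens here
    try_files /index.html =404;
}

location = / {
    add_header "X-ROOT" "true";
    root /var/www/test;
    try_files /index.html =404;
}
```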

Load Balancing VIP (no replies)

Hi,

I want to set up load balancing with NGINX on our AWS servers, which already run NGINX as a proxy, but I just want to confirm that the single IP (VIP) of the load balancer is all that will ever be returned when accessing a website on any of the load-balanced backend servers.

Thanks,
Jamesy
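[Editor's note: for what it's worth, with nginx acting as the balancer, clients only ever talk to the nginx address; the backend addresses stay private unless the application itself emits them in links or redirects. A minimal sketch with placeholder addresses:]

```nginx
upstream app_pool {
    server 10.0.0.11;
    server 10.0.0.12;
}

server {
    listen 80;
    location / {
        proxy_pass http://app_pool;
        proxy_set_header Host $host;
        # proxy_redirect defaults to rewriting backend redirects that point
        # at the proxied address back onto this server's address
    }
}
```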

[Loadbalancing] error 404 but backend work fine (2 replies)

Hello all,

I'm trying to load balance my web app using nginx, but I get a 404 instead of my app.
All my backends (MS IIS) work fine when I query them directly.

Can you help me please?

Here is the header when I query my IIS backend directly:
See attached picture

Here is my nginx config:

upstream mywebapp {
ip_hash;
server 10.236.10.21:80;
server 10.236.10.22:80;
server 10.236.10.23:80;
server 10.236.10.24:80;
server 10.236.10.25:80;
server 10.236.10.26:80;
keepalive 16;
}
server {
listen 443 ssl;
server_name test.mywebapp.fr;
#client_max_body_size 10m;
ssl on;

location / {
proxy_pass http://mywebapp.fr;
proxy_http_version 1.1;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $host:443;
proxy_set_header X-Forwarded-Server $host;
proxy_set_header X-Forwarded-Port 443;
proxy_set_header X-Forwarded-Proto https;
proxy_set_header Connection "";
proxy_read_timeout 60m;
proxy_pass_request_headers on;
}
}
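[Editor's note: a hedged observation on the config itself: `proxy_pass http://mywebapp.fr;` points at the *domain* mywebapp.fr, not at the `upstream mywebapp { ... }` block, so the ip_hash pool is never used; and IIS sites bound to a host name return 404 unless the expected Host header is forwarded. A sketch of the adjusted location:]

```nginx
location / {
    proxy_pass http://mywebapp;          # the upstream block's name
    proxy_http_version 1.1;
    proxy_set_header Host $host;         # IIS host-header bindings need this
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto https;
    proxy_set_header Connection "";      # enables keepalive to the upstreams
    proxy_read_timeout 60m;
}
```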