Tuesday, December 9, 2014

Nginx, Django, Gunicorn & Mysql Installation and Configuration on Debian 7

Nginx, Django, Gunicorn & Mysql Installation and Configuration

Step One;
Update packages
#apt-get update
#apt-get upgrade

Step Two;
Install and create virtualenv
#apt-get install python-virtualenv python-dev
#virtualenv /opt/myenv
Notice that a new directory "myenv" was created in the "/opt" directory. This is where our virtualenv will live. Make sure to replace "/opt/myenv" with the path where you want your virtualenv installed. I typically put my envs in /opt, but this is strictly preference. Some people create a directory named "webapps" at the root of the VPS. Choose whatever method makes the most sense to you.
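If you want a quick sanity check, listing the new environment's bin directory should show the activate script along with the environment's own python and pip:
#ls /opt/myenv/bin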

Step Three;
Install Django
#source /opt/myenv/bin/activate
You should now see that "(myenv)" has been appended to the beginning of your terminal prompt. This will help you to know when your virtualenv is active and which virtualenv is active should you have multiple virtualenvs on the VPS.
With your virtualenv active, we can now install Django. To do this, we will use pip, a Python package manager much like easy_install. Here is the command you will run:
(myenv)root@Django:/opt/myenv/bin#pip install django
You now have Django installed in your virtualenv! Now let's get our database server going.
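If you'd like to confirm exactly which Django version pip pulled in, you can ask pip while the virtualenv is still active (purely optional):
(myenv)root@Django:/opt/myenv/bin#pip freeze | grep -i django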

Step Four;
Install Mysql Server
Since we don't need our virtualenv active for this part, run the following command to deactivate:
(myenv)root@Django:/opt/myenv/bin#deactivate
This will always deactivate whatever virtualenv is active currently. Now we need to install dependencies for Mysql to work with Django with this command:
#apt-get install python-mysqldb libmysqlclient-dev mysql-server
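Debian normally starts MySQL as soon as the packages are installed. If you want to be sure the server is actually up before moving on, a quick status check is enough:
#service mysql status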


Step Five;
Install Nginx
#apt-get install nginx

Step Six;
Install Gunicorn
Gunicorn is a very powerful Python WSGI HTTP Server. Since it is a Python package we need to first activate our virtualenv to install it. Here is how we do that:
#source /opt/myenv/bin/activate
Make sure you see "(myenv)" at the beginning of your terminal prompt. With your virtualenv now active, run this command:
(myenv)root@Django:/opt/myenv/bin# pip install gunicorn
Gunicorn is now installed within your virtualenv.
If all you wanted was to get everything installed, feel free to stop here. Otherwise, please continue for instructions on how to configure everything to work together and make your app accessible to others on the web.
(myenv)root@Django:/opt/myenv/bin#deactivate

Step Seven;
Configure Mysql
#mysql -u root -p 
>CREATE DATABASE djangodb;
>CREATE USER 'django'@'localhost' IDENTIFIED BY 'passwd';
>GRANT ALL PRIVILEGES ON djangodb . * TO 'django'@'localhost';
>FLUSH PRIVILEGES;
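As an optional sanity check, make sure the new account can actually reach the new database; logging in should drop you at a mysql> prompt:
#mysql -u django -p djangodb
>exit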

Step Eight;
Create a Django project
In order to go any further we need a Django project to test with. This will allow us to see if what we are doing is working or not. Change directories into the directory of your virtualenv (in my case /opt/myenv) like so:
#cd /opt/myenv
Now make sure your virtualenv is active. If you're unsure then just run the following command to ensure you're activated:
#source /opt/myenv/bin/activate
With your virtualenv now active, run the following command to start a new Django project:
(myenv)root@Django:/opt/myenv/bin#django-admin.py startproject myproject
You should see a new directory called "myproject" inside your virtualenv directory. This is where our new Django project files live.
In order for Django to be able to talk to our database we need to install a backend for Mysql. Make sure your virtualenv is active and run the following command in order to do this:
(myenv)root@Django:/opt/myenv/bin# pip install mysql-python
Change directories into the new "myproject" directory and then into its subdirectory, which is also called "myproject", like this:
(myenv)root@Django:/opt/myenv/bin#cd /opt/myenv/myproject/myproject
Edit the settings.py file with your editor of choice:
(myenv)root@Django:/opt/myenv/myproject/myproject#nano settings.py
Find the database settings and edit them to look like this:
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'djangodb',
        'USER': 'django',
        'PASSWORD': 'passwd',
        'HOST': 'localhost',   # Or an IP Address that your DB is hosted on
        'PORT': '3306',
    }
}
Save and exit the file. Now move up one directory so you're in your main Django project directory:
#cd /opt/myenv/myproject
Activate your virtualenv if you haven't already with the following command:
#source /opt/myenv/bin/activate
With your virtualenv active, run the following command so that Django can add its initial configuration and other tables to your database:
(myenv)root@Django:/opt/myenv/myproject/#python manage.py syncdb
You should see some output describing what tables were installed, followed by a prompt asking if you want to create a superuser. This is optional and depends on if you will be using Django's auth system or the Django admin.
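If you skip the superuser prompt now and change your mind later, Django can create one at any time with its built-in management command:
(myenv)root@Django:/opt/myenv/myproject/#python manage.py createsuperuser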

Step Nine;
Configure Gunicorn
First let's just go over running Gunicorn with default settings. Here is the command to run it with the defaults:
#gunicorn_django --bind yourdomainorip.com:8001
Be sure to replace "yourdomainorip.com" with your domain, or the IP address of your VPS if you prefer. Now go to your web browser and visit yourdomainorip.com:8001 and see what you get. You should get the Django welcome screen.
If you look closely at the output from the above command however, you will notice only one Gunicorn worker booted. What if you are launching a large-scale application on a large VPS? Have no fear! All we need to do is modify the command a bit like so:
#gunicorn_django --workers=3 --bind yourdomainorip.com:8001
Now you will notice that 3 workers were booted instead of just 1 worker. You can change this number to whatever suits your needs.
Since we ran the command to start Gunicorn as root, Gunicorn is now running as root. What if you don't want that? Again, we can alter the command above slightly to accommodate:
#gunicorn_django --workers=3 --user=nobody --bind yourdomainorip.com:8001
If you want to set more options for Gunicorn, then it is best to set up a config file that you can call when running Gunicorn. This will result in a much shorter and easier to read/configure Gunicorn command.
You can place the configuration file for gunicorn anywhere you would like. For simplicity, we will place it in our virtualenv directory. Navigate to the directory of your virtualenv like so:
#cd /opt/myenv
Now open your config file with your preferred editor (nano is used in the example below):
#nano gunicorn_config.py
Add the following contents to the file:
command = '/opt/myenv/bin/gunicorn'
pythonpath = '/opt/myenv/myproject'
bind = '127.0.0.1:8001'
workers = 3
user = 'nobody'
Save and exit the file. What these options do is to set the path to the gunicorn binary, add your project directory to your Python path, set the domain and port to bind Gunicorn to, set the number of gunicorn workers and set the user Gunicorn will run as.
In order to run the server, this time we need a bit longer command. Enter the following command into your prompt:
#/opt/myenv/bin/gunicorn -c /opt/myenv/gunicorn_config.py myproject.wsgi
You will notice that in the above command we pass the "-c" flag. This tells gunicorn that we have a config file we want to use, which we pass in just after the "-c" flag. Lastly, we pass in a Python dotted notation reference to our WSGI file so that Gunicorn knows where our WSGI file is.
Running Gunicorn this way requires that you either run Gunicorn in its own screen session (if you're familiar with using screen), or that you background the process by hitting "ctrl + z" and then typing "bg" and "enter" right after running the Gunicorn command. This will background the process so it continues running even after your current session is closed. It also poses the problem of needing to manually start or restart Gunicorn should your VPS get rebooted or crash for some reason. To solve this problem, most people use supervisord to manage Gunicorn and start/restart it as needed. Installing and configuring supervisord has been covered in another article which can be found here.
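As a rough sketch of that approach (the details belong in the article mentioned above, and the program name here is just an example), a minimal supervisord program entry for this setup could look something like this:
[program:gunicorn]
command = /opt/myenv/bin/gunicorn -c /opt/myenv/gunicorn_config.py myproject.wsgi
directory = /opt/myenv/myproject
user = nobody
autostart = true
autorestart = true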
Lastly, this is by no means an exhaustive list of configuration options for Gunicorn. Please read the Gunicorn documentation found at gunicorn.org for more on this topic.

Step Ten;
Configure Nginx
#service nginx restart
Since we are only setting up NGINX to handle static files, we first need to decide where those static files will be stored. Open the settings.py file for your Django project and edit the STATIC_ROOT line to look like this:
STATIC_ROOT = "/opt/myenv/static/" 
(This line goes in /opt/myenv/myproject/myproject/settings.py.)
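Keep in mind that Nginx can only serve files that actually exist in that directory. With your virtualenv active, Django's collectstatic management command copies your apps' static files into STATIC_ROOT; run it again whenever your static files change:
(myenv)root@Django:/opt/myenv/myproject/#python manage.py collectstatic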
Now create an Nginx server block for the project:
#nano /etc/nginx/sites-available/myproject
server {
        server_name yourdomainorip.com;

        access_log off;

        location /static/ {
            alias /opt/myenv/static/;
        }

        location / {
                proxy_pass http://127.0.0.1:8001;
                proxy_set_header X-Forwarded-Host $server_name;
                proxy_set_header X-Real-IP $remote_addr;
                add_header P3P 'CP="ALL DSP COR PSAa PSDa OUR NOR ONL UNI COM NAV"';
        }
    }
Enable the new site, remove the default one, and restart Nginx:
#cd /etc/nginx/sites-enabled
#ln -s ../sites-available/myproject
#rm default
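Before restarting, it doesn't hurt to have Nginx validate the new configuration; any syntax error in the server block will show up here instead of taking the site down:
#nginx -t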
#service nginx restart
And that's it! You now have Django installed and working with Mysql and your app is web accessible with NGINX serving static content and Gunicorn serving as your app server. If you have any questions or further advice, be sure to leave it in the comments section.

Monday, September 15, 2014

Puppet installation and configuration

Puppet
Puppet is a system automation tool.
To give an example, installing and managing 10 servers may look easy and painless; you can edit the configuration files one by one.
Once that number starts to grow, though, you will sooner or later run into problems and difficulties. That is where Puppet comes to the rescue and saves you a great deal of hassle.

What we need for the installation :))
VirtualBox,
Debian 7.6 netinstall,

Things that need to be done on both the Puppet master and the agent side;

Since we are handling the DNS names locally, add them to the hosts file and make sure the machines can ping each other by name.
Puppet Master;
#nano /etc/hosts
127.0.0.1       localhost
127.0.1.1       deb7.6  deb7
10.1.0.172      puppetagent

Puppet Agent;
#nano /etc/hosts
127.0.0.1       localhost
127.0.1.1       deb7.6  deb7
10.1.0.171      puppetmaster

Installing NTP is not strictly necessary; for this test you can sync the clocks manually from NTP servers on the same network. But if you are going to integrate this into a real, production system, a time server is a must, and the clocks on the master and the agent need to agree.

Puppet Master Installation;

Puppet Master;
# wget https://apt.puppetlabs.com/puppetlabs-release-wheezy.deb
#dpkg -i puppetlabs-release-wheezy.deb
#apt-get update
#apt-get install puppetmaster-passenger

So why did we install the Passenger variant? Let me explain.
When Passenger is installed, the processes are managed by Apache; in other words, if Apache is running, Puppet is running too.

Let's delete the existing certificates.
#rm -rf /var/lib/puppet/ssl
Let's do Puppet's basic configuration;
#nano /etc/puppet/puppet.conf
[main]
logdir=/var/log/puppet
vardir=/var/lib/puppet
ssldir=/var/lib/puppet/ssl
rundir=/var/run/puppet
factpath=$vardir/lib/facter
templatedir=$confdir/templates

[master]
# These are needed when the puppetmaster is run by passenger
# and can safely be removed if webrick is used.
ssl_client_header = SSL_CLIENT_S_DN
ssl_client_verify_header = SSL_CLIENT_VERIFY

In addition, we need to add "certname = puppet" under [main], and we also need to enter the FQDNs under [main]: "dns_alt_names = puppet, puppetmaster".
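With those two lines added, the top of the [main] section ends up looking like this:
[main]
certname = puppet
dns_alt_names = puppet, puppetmaster
logdir=/var/log/puppet
vardir=/var/lib/puppet
ssldir=/var/lib/puppet/ssl
rundir=/var/run/puppet
factpath=$vardir/lib/facter
templatedir=$confdir/templates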

Save and exit.

Now let's generate the SSL certificates.
#puppet master --verbose --no-daemonize
It should produce output like the one below;
Info: Creating a new SSL key for ca
Info: Creating a new SSL certificate request for ca
Info: Certificate Request fingerprint (SHA256): EC:7D:ED:15:DE:E3:F1:49:1A:1B:9C:D8:04:F5:46:EF:B4:33:91:91:B6:5D:19:AC:21:D6:40:46:4A:50:5A:29
Notice: Signed certificate request for ca
...
Notice: Signed certificate request for puppet
Notice: Removing file Puppet::SSL::CertificateRequest puppet at '/var/lib/puppet/ssl/ca/requests/puppet.pem'
Notice: Removing file Puppet::SSL::CertificateRequest puppet at '/var/lib/puppet/ssl/certificate_requests/puppet.pem'
Notice: Starting Puppet master version 3.6.2
If you want to take a look at the certificates:
#puppet cert list --all

Let's create a file as shown below; this file is where we define how the hosts will be installed and configured.
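The file itself isn't shown at this point in the post; as a minimal sketch, the main manifest lives in /etc/puppet/manifests/site.pp and could start out like this, using the agent's certificate name (seen later when we sign it) as the node name:
#nano /etc/puppet/manifests/site.pp
node 'puppetagent.local' {
  # resources to apply on this agent go here, for example the file
  # resource used in the test at the end of this post
}
node default {
}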

Finally;
#service apache2 restart

Puppet Agent;

# wget https://apt.puppetlabs.com/puppetlabs-release-wheezy.deb
#dpkg -i puppetlabs-release-wheezy.deb
#apt-get update
#apt-get install puppet
In the file we open below, change the value to "yes".
#nano /etc/default/puppet
START=yes

Let's configure the agent.
#nano /etc/puppet/puppet.conf
Delete the templatedir line and the [master] section, then add:
[agent]
server = puppetmaster
#service puppet start


Puppet master;
To approve the request coming from the agent, we sign the certificate on the puppet master.

#puppet cert list
"puppetagent.local"(SHA256) B1:96:ED:1F:F7:1E:40:53:C1:D4:1B:3C:75:F4:7C:0B:A9:4C:1B:5D:95:2B:79:C0:08:DD:2B:F4:4A:36:EE:E3
#puppet cert sign puppetagent.local
To remove a certificate;
#puppet cert clean "hostname"

Switch over to the puppet agent and test the agent side;
#puppet agent --test
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Loading facts in /var/lib/puppet/lib/facter/pe_version.rb
Info: Loading facts in /var/lib/puppet/lib/facter/puppet_vardir.rb
Info: Loading facts in /var/lib/puppet/lib/facter/root_home.rb
Info: Loading facts in /var/lib/puppet/lib/facter/facter_dot_d.rb
Info: Caching catalog for hostname
Info: Applying configuration version '1407966707'
For a system test;
#nano /etc/puppet/manifests/site.pp
file {'/tmp/example-ip':                                            # resource type file and filename
  ensure  => present,                                               # make sure it exists
  mode    => 0644,                                                  # file permissions
  content => "Here is my Public IP Address: ${ipaddress_eth0}.\n",  # note the ipaddress_eth0 fact
}
When we test it on the agent side;
#cat /tmp/example-ip
Here is my Public IP Address: 128.131.192.11.


Wednesday, July 9, 2014

OpenStack Icehouse Installation on Ubuntu 12.04 part 3

Configure Compute Node

Compute node settings;
We need to answer 'Yes' to the Supermin question while installing the packages below.

#apt-get install nova-compute-kvm python-guestfs
(Supermin: 'Yes')

#
#dpkg-statoverride --update --add root root 0644 /boot/vmlinuz-$(uname -r)
Create the statoverride file so that the command above is also applied automatically to new kernels.
# nano /etc/kernel/postinst.d/statoverride
#!/bin/sh
version="$1"
# passing the kernel version is required
[ -z "${version}" ] && exit 0
dpkg-statoverride --update --add root root 0644 /boot/vmlinuz-${version}


# chmod +x /etc/kernel/postinst.d/statoverride

#nano /etc/nova/nova.conf
[DEFAULT]
rpc_backend = rabbit
rabbit_host = Controller
rabbit_password = RABBIT_PASS
auth_strategy = keystone
my_ip = 10.0.0.31
vnc_enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = 10.0.0.31
novncproxy_base_url = http://Controller:6080/vnc_auto.html
glance_host = Controller

[database]
# The SQLAlchemy connection string used to connect to the database
connection = mysql://nova:NOVA_DBPASS@Controller/nova
[keystone_authtoken]
auth_uri = http://Controller:5000
auth_host = Controller
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = NOVA_PASS


#nano /etc/nova/nova-compute.conf
[libvirt]
...
virt_type = qemu

#egrep -c '(vmx|svm)' /proc/cpuinfo
#rm /var/lib/nova/nova.sqlite
#service nova-compute restart

Network (Legacy)

###Controller Node
#nano  /etc/nova/nova.conf
[DEFAULT]
...
network_api_class = nova.network.api.API
security_group_api = nova
###
# service nova-api restart ; service nova-scheduler restart ; service nova-conductor restart

####Compute Node

#apt-get install nova-network nova-api-metadata

#
#nano /etc/nova/nova.conf
[DEFAULT]
...
network_api_class = nova.network.api.API
security_group_api = nova
firewall_driver = nova.virt.libvirt.firewall.IptablesFirewallDriver
network_manager = nova.network.manager.FlatDHCPManager
network_size = 254
allow_same_net_traffic = False
multi_host = True
send_arp_for_ha = True
share_dhcp_address = True
force_dhcp_release = True
flat_network_bridge = br100
flat_interface = eth1
public_interface = eth0
#
#service nova-network restart ; service nova-api-metadata restart

##########Controller node
#source admin-openrc.sh

#nova network-create demo-net --bridge br100 --multi-host T \
--fixed-range-v4 10.1.0.0/24

To make sure the services are running;
#nova-manage service list

Dashboard

#apt-get install apache2 memcached libapache2-mod-wsgi openstack-dashboard
If you want to remove the Ubuntu theme;
# apt-get remove --purge openstack-dashboard-ubuntu-theme

http://controller/horizon


OpenStack Icehouse Installation on Ubuntu 12.04 part 2

Let's continue where we left off ;)

OpenStack services

#apt-get install python-pip

#apt-get install python-novaclient

Image services (Glance)
#apt-get install glance python-glanceclient
#nano /etc/glance/glance-api.conf
#nano /etc/glance/glance-registry.conf
[database]
connection = mysql://glance:GLANCE_DBPASS@Controller/glance
#
#nano /etc/glance/glance-api.conf
[DEFAULT]
...
rpc_backend = rabbit
rabbit_host = Controller
rabbit_password = RABBIT_PASS

# rm /var/lib/glance/glance.sqlite
#mysql -u root -p
mysql> CREATE DATABASE glance;
mysql> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
IDENTIFIED BY 'GLANCE_DBPASS';
mysql> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
IDENTIFIED BY 'GLANCE_DBPASS';

# su -s /bin/sh -c "glance-manage db_sync" glance

#keystone user-create --name=glance --pass=GLANCE_PASS \
--email=glance@example.com

#keystone user-role-add --user=glance --tenant=service --role=admin

#nano /etc/glance/glance-api.conf
#nano /etc/glance/glance-registry.conf

[keystone_authtoken]
auth_uri = http://Controller:5000
auth_host = Controller
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = glance
admin_password = GLANCE_PASS

[paste_deploy]
flavor = keystone

#keystone service-create --name=glance --type=image \
--description="OpenStack Image Service"

#keystone endpoint-create \
--service-id=$(keystone service-list | awk '/ image / {print $2}') \
--publicurl=http://Controller:9292 \
--internalurl=http://Controller:9292 \
--adminurl=http://Controller:9292

#service glance-registry restart ; service glance-api restart
Image Service installation
#mkdir /tmp/images

#cd /tmp/images/
#wget http://cdn.download.cirros-cloud.net/0.3.2/cirros-0.3.2-x86_64-disk.img
#glance image-create --name "cirros-0.3.2-x86_64" --disk-format qcow2 \
--container-format bare --is-public True --progress < cirros-0.3.2-x86_64-disk.img

#glance image-list

To load an image directly from the web:
#glance image-create --name="cirros-0.3.2-x86_64" --disk-format=qcow2 \
--container-format=bare --is-public=true \
--copy-from http://cdn.download.cirros-cloud.net/0.3.2/cirros-0.3.2-x86_64-disk.img

Install Compute controller services
#apt-get install nova-api nova-cert nova-conductor nova-consoleauth \
nova-novncproxy nova-scheduler python-novaclient

#nano /etc/nova/nova.conf
[DEFAULT]
rpc_backend = rabbit
rabbit_host = Controller
rabbit_password = RABBIT_PASS
my_ip = 10.0.0.11
vncserver_listen = 10.0.0.11
vncserver_proxyclient_address = 10.0.0.11

[database]
connection = mysql://nova:NOVA_DBPASS@Controller/nova

## rm /var/lib/nova/nova.sqlite
#mysql -u root -p
mysql> CREATE DATABASE nova;
mysql> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
mysql> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';

# su -s /bin/sh -c "nova-manage db sync" nova
# keystone user-create --name=nova --pass=NOVA_PASS --email=nova@example.com
#keystone user-role-add --user=nova --tenant=service --role=admin
#nano /etc/nova/nova.conf
[DEFAULT]
...
auth_strategy = keystone

[keystone_authtoken]
auth_uri = http://Controller:5000
auth_host = Controller
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = NOVA_PASS
#
#nano /etc/nova/api-paste.ini
[filter:authtoken]
auth_uri = http://Controller:5000
auth_host = Controller
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = NOVA_PASS

#keystone service-create --name=nova --type=compute \
--description="OpenStack Compute"
#
#keystone endpoint-create \
--service-id=$(keystone service-list | awk '/ compute / {print $2}') \
--publicurl=http://Controller:8774/v2/%\(tenant_id\)s \
--internalurl=http://Controller:8774/v2/%\(tenant_id\)s \
--adminurl=http://Controller:8774/v2/%\(tenant_id\)s

#service nova-api restart ; service nova-cert restart ;service nova-consoleauth restart ; service nova-scheduler restart ; service nova-conductor restart ;service nova-novncproxy restart

#nova image-list


OpenStack Icehouse Installation on Ubuntu 12.04 part 1

OpenStack Icehouse Installation on Ubuntu 12.04

The layout will be as follows;
1 controller
1 compute

I used Ubuntu 12.04 as the operating system; you can also use 14.04 if you prefer.
The base settings we need on the machines that will serve as controller and compute;
Assign static IPs and make sure they can resolve each other's names.

Network Settings;
Controller and Compute;

#nano /etc/hosts
# compute
10.0.0.31 compute
# controller
10.0.0.11 controller


Controller;
#nano /etc/network/interfaces

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet static
        address 10.0.0.11
        netmask 255.255.255.0
        network 10.0.0.0
        broadcast 10.0.0.255
        gateway 10.0.0.1

# service networking stop && service networking start
#ping compute
Compute;
#nano /etc/network/interfaces

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet static
        address 10.0.0.31
        netmask 255.255.255.0
        network 10.0.0.0
        broadcast 10.0.0.255
        gateway 10.0.0.1

# The external network interface
auto eth1
iface eth1 inet manual
                up ip link set dev $IFACE up
                down ip link set dev $IFACE down

# service networking stop && service networking start
#ping controller

Don't forget to run the ping tests!

NTP Settings;
Controller;
#apt-get install ntp

Compute;
#apt-get install ntp
#nano /etc/ntp.conf
server controller iburst
server 0.deb.pool.ntp.org

#service ntp restart

Database installation;
Controller;
#apt-get install python-mysqldb mysql-server
#nano /etc/mysql/my.cnf
[mysqld]
...
bind-address = 10.0.0.11
default-storage-engine = innodb
collation-server = utf8_general_ci
init-connect = 'SET NAMES utf8'
character-set-server = utf8

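The new settings only take effect once MySQL has been restarted, so restart it before initializing the database:
# service mysql restart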
# mysql_install_db
# mysql_secure_installation

Compute;
#apt-get install python-mysqldb

Upgrading the Controller and Compute nodes to Icehouse;
We only do this on Ubuntu 12.04; if you are using 14.04 there is no need.
Controller and Compute;
# apt-get install python-software-properties
# add-apt-repository cloud-archive:icehouse
# apt-get update
# apt-get dist-upgrade
# apt-get install linux-image-generic-lts-saucy linux-headers-generic-lts-saucy
# reboot

Installing the messaging service;
Controller;
RABBIT_PASS = a password that you choose.
# apt-get install rabbitmq-server
#rabbitmqctl change_password guest RABBIT_PASS

Identity Service installation and configuration;
Controller;
# apt-get install keystone
#nano /etc/keystone/keystone.conf
[database]
# The SQLAlchemy connection string used to connect to the database
connection = mysql://keystone:KEYSTONE_DBPASS@controller/keystone

# rm /var/lib/keystone/keystone.db
#mysql -u root -p
mysql> CREATE DATABASE keystone;
mysql> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
IDENTIFIED BY 'KEYSTONE_DBPASS';
mysql> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
IDENTIFIED BY 'KEYSTONE_DBPASS';
mysql> exit

# su -s /bin/sh -c "keystone-manage db_sync" keystone

# openssl rand -hex 10 (This will generate a random token like the one below.)
db4429b71cd2b9b54d47

We use the token generated above as the admin token below.
We also set where the logs will be written.
#nano /etc/keystone/keystone.conf
[DEFAULT]
# A "shared secret" between keystone and other openstack services
admin_token = db4429b71cd2b9b54d47
log_dir = /var/log/keystone

# service keystone restart

Creating the user and service records;

export OS_SERVICE_TOKEN=db4429b71cd2b9b54d47
export OS_SERVICE_ENDPOINT=http://Controller:35357/v2.0

###Admin
#keystone user-create --name=admin --pass=admin1 --email=armagan.yaman@mail.com
#keystone role-create --name=admin
#keystone tenant-create --name=admin --description="Admin Tenant"
#keystone user-role-add --user=admin --tenant=admin --role=admin
#keystone user-role-add --user=admin --role=_member_ --tenant=admin

###User
#keystone user-create --name=demo --pass=demopass --email=demo@mail.com
#keystone tenant-create --name=demo --description="Demo Tenant"
#keystone user-role-add --user=demo --role=_member_ --tenant=demo

###service
#keystone tenant-create --name=service --description="Service Tenant"
#keystone service-create --name=keystone --type=identity \
--description="OpenStack Identity"

#keystone endpoint-create \
--service-id=$(keystone service-list | awk '/ identity / {print $2}') \
--publicurl=http://Controller:5000/v2.0 \
--internalurl=http://Controller:5000/v2.0 \
--adminurl=http://Controller:35357/v2.0

####Identity Service
#unset OS_SERVICE_TOKEN OS_SERVICE_ENDPOINT
#keystone --os-username=admin --os-password=admin1 \
--os-auth-url=http://Controller:35357/v2.0 token-get
#keystone --os-username=admin --os-password=admin1 \
--os-tenant-name=admin --os-auth-url=http://Controller:35357/v2.0 \
token-get

Create a file; we will define some environment variables in it;
#nano admin-openrc.sh
export OS_USERNAME=admin
export OS_PASSWORD=admin1
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://Controller:35357/v2.0

Let's verify the steps we have done.
#source admin-openrc.sh
#keystone token-get
#keystone user-list
#keystone user-role-list --user admin --tenant admin

Tuesday, June 10, 2014

Nfdump, Nfsen installation and configuration

NFDUMP

Os: Debian wheezy 7.5
#apt-get install gcc flex librrd-dev make byacc autoconf
#cd  /opt
# wget http://sourceforge.net/projects/nfdump/files/stable/nfdump-1.6.12/nfdump-1.6.12.tar.gz/download
#tar xzvf nfdump-1.6.12.tar.gz
#cd nfdump-1.6.12
# ./configure --enable-nfprofile --enable-nftrack --enable-sflow
#make
#make install

NFSEN

# apt-get install apache2 libapache2-mod-php5 php5-common libmailtools-perl rrdtool librrds-perl
#cd /opt
#wget http://heanet.dl.sourceforge.net/project/nfsen/stable/nfsen-1.3.6p1/nfsen-1.3.6p1.tar.gz
#tar xzvf nfsen-1.3.6p1.tar.gz
#cd nfsen-1.3.6p1/
#cp etc/nfsen-dist.conf /etc/nfsen.conf
# mkdir -p /data/nfsen
#nano /etc/nfsen.conf
[..]
$BASEDIR = "/data/nfsen";
[..]
$PREFIX  = '/usr/local/bin';#nfdump tools location
[..]
$USER    = "www-data";
[..]
$WWWUSER  = "www-data";
$WWWGROUP = "www-data";
[..]
%sources = (
 'for-Cisco' => {'port'=>'9995','col'=>'#0000ff','type'=>'netflow'},
 'for-Hp&Juniper' => {'port'=>'9996','col'=>'#0000ff','type'=>'sflow'},

);
[..]
$MAIL_FROM   = 'youraccount@yourdomain.ext';
$SMTP_SERVER = 'yoursmtphost.yourdomain.ext';
[..]

# perl -MCPAN -e 'install Socket6'
#which perl
/usr/bin/perl
# ./install.pl /etc/nfsen.conf
The script will ask a question about the Perl location; the output of 'which perl' will help you ;)
#cd /data/nfsen/bin/
#./nfsen start
To have it start at boot:
#ln -s /data/nfsen/bin/nfsen /etc/init.d/nfsen
#update-rc.d nfsen defaults 20
#ln -s /var/www/nfsen/nfsen.php /var/www/nfsen/index.php
Open a browser and go to http://nfsen-nfdump-ip/nfsen/

If you get an error like “Frontend – Backend version missmatch!”, see:
http://sourceforge.net/p/nfsen/mailman/message/28748240/
or
#nano /var/www/nfsen/nfsen.php
// Session check
-if ( !array_key_exists('backend_version', $_SESSION ) || $_SESSION['backend_version'] !=  $expected_version ) {
+if ( array_key_exists('backend_version', $_SESSION ) && 
+$_SESSION['backend_version'] !=  $expected_version ) {
        session_destroy();
        session_start();
        $_SESSION['version'] = $expected_version;}
###
If you get errors like the following from 'service nfsen start/stop/reconfig':
Reconfiguring /usr/share/nfsen/bin/nfsen: Subroutine Lookup::pack_sockaddr_in6 redefined at /usr/share/perl5/Exporter.pm line 67.
at /usr/share/nfsen/libexec/Lookup.pm line 43
Subroutine Lookup::unpack_sockaddr_in6 redefined at /usr/share/perl5/Exporter.pm line 67.
at /usr/share/nfsen/libexec/Lookup.pm line 43
Subroutine Lookup::sockaddr_in6 redefined at /usr/share/perl5/Exporter.pm line 67.
at /usr/share/nfsen/libexec/Lookup.pm line 43
Subroutine AbuseWhois::pack_sockaddr_in6 redefined at /usr/share/perl5/Exporter.pm line 67.
at /usr/share/nfsen/libexec/AbuseWhois.pm line 42
Subroutine AbuseWhois::unpack_sockaddr_in6 redefined at /usr/share/perl5/Exporter.pm line 67.
at /usr/share/nfsen/libexec/AbuseWhois.pm line 42
Subroutine AbuseWhois::sockaddr_in6 redefined at /usr/share/perl5/Exporter.pm line 67.
at /usr/share/nfsen/libexec/AbuseWhois.pm line 42
Subroutine AbuseWhois::pack_sockaddr_in6 redefined at /usr/share/nfsen/libexec/AbuseWhois.pm line 44
Subroutine AbuseWhois::unpack_sockaddr_in6 redefined at /usr/share/nfsen/libexec/AbuseWhois.pm line 44
Subroutine AbuseWhois::sockaddr_in6 redefined at /usr/share/nfsen/libexec/AbuseWhois.pm line 44
###
/data/nfsen/libexec/AbuseWhois.pm
/data/nfsen/libexec/Lookup.pm
Change:
use Socket6;
to:
Socket6->import(qw(pack_sockaddr_in6 unpack_sockaddr_in6 inet_pton getaddrinfo));

It will work fine ;)


Wednesday, March 19, 2014

Mysql database cluster and Nginx web server high availability and scalability part 3

Wordpress high availability and scalability part 3

HAProxy installation;
HAProxy is not available in the Debian wheezy repository, so to install it we add the backports repository.
#vi /etc/apt/sources.list
deb http://mirror.vorboss.net/debian/ wheezy-backports main
deb-src http://mirror.vorboss.net/debian/ wheezy-backports main
#apt-get update
#apt-get install haproxy
#nano /etc/default/haproxy
ENABLED=1
#mv /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.bak
#vi /etc/haproxy/haproxy.cfg
global
    log 127.0.0.1 local0 notice
    maxconn 2000
    user haproxy
    group haproxy

defaults
    log     global
    mode    http
    option  httplog
    option  dontlognull
    retries 3
    option redispatch
    timeout connect  5000
    timeout client  10000
    timeout server  10000
listen appname 0.0.0.0:80
    mode http
    stats enable
    stats uri /haproxy?stats
    stats realm Strictly\ Private
    stats auth admin:P123456!
    balance roundrobin
    option httpclose
    option forwardfor
    server Nginx1 10.1.26.40:80 check
    server Nginx2 10.1.26.41:80 check

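Before starting the service you can optionally have HAProxy parse the new configuration first; the -c flag only checks the file and exits:
#haproxy -c -f /etc/haproxy/haproxy.cfg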
#service haproxy start
Keepalived installation;
At this stage of the setup we will tie the two load balancers together with a virtual interface; this virtual IP will answer the requests coming from outside, and the system will keep running even if one of the LBs goes down.
On both LBs:
#apt-get install keepalived
#vi /etc/keepalived/keepalived.conf
LB1
vrrp_instance VI_1 {
interface eth0
state MASTER
virtual_router_id 10
priority 101   # 101 on the master, 100 on the backup
virtual_ipaddress {
10.1.26.45 # virtual IP address
}
}

LB2
vrrp_instance VI_1 {
interface eth0
state BACKUP
virtual_router_id 10
priority 100   # 101 on the master, 100 on the backup
virtual_ipaddress {
10.1.26.45 # virtual IP address
}
}
Start the service on both LBs.
#service keepalived start
When you run the following command on the master;
#ip addr show eth0
The output you should get;
Eth0:  mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 1000
    link/ether 00:0c:29:6f:ed:60 brd ff:ff:ff:ff:ff:ff
    inet 10.1.26.43/24 brd 192.168.1.255 scope global eth0
    inet 10.1.26.45/32 scope global eth0
    inet6 fe80::20c:29ff:fe6f:ed60/64 scope link
       valid_lft forever preferred_lft forever
When we point our domain at our public IP and, on the firewall, forward it to the virtual interface address, the WordPress installation screen appears.
If you want to test whether the system is working, you can test it with the PHP code below, or browse to http://haproxy-ip-address/haproxy?stats and enter the username and password to watch the HAProxy status on its stats page.
<?php         
header('Content-Type: text/plain');
echo "Server IP: ".$_SERVER['SERVER_ADDR'];
echo "\nClient IP: ".$_SERVER['REMOTE_ADDR'];
echo "\nX-Forwarded-for: ".$_SERVER['HTTP_X_FORWARDED_FOR'];
?>