
Tuesday, April 21, 2015

Haproxy Transparent Mode on Centos 7

HAProxy can't do transparent binding or proxying alone; it has to stand on a suitably compiled and tuned Linux kernel and operating system. CentOS 7, however, supports HAProxy transparent mode.
Step-by-step configuration:
1. sysctl settings
2. iptables rules
3. ip route rules
4. HAProxy configuration

Step 1: sysctl settings
 – net.ipv4.ip_forward
 – net.ipv4.ip_nonlocal_bind
# echo 1 > /proc/sys/net/ipv4/ip_forward
# echo 1 > /proc/sys/net/ipv4/ip_nonlocal_bind
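The echo commands above do not survive a reboot. As a sketch, the same two settings can be made persistent through /etc/sysctl.conf:

```shell
# Persist the transparent-proxy sysctls across reboots
cat >> /etc/sysctl.conf <<'EOF'
net.ipv4.ip_forward = 1
net.ipv4.ip_nonlocal_bind = 1
EOF
# apply without rebooting
sysctl -p
```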

Step 2: iptables rules
#iptables -F -t mangle
#iptables -F
#iptables -F -t nat
#iptables -t mangle -N DIVERT
#iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
#iptables -t mangle -A DIVERT -j MARK --set-mark 1

#iptables -t mangle -A DIVERT -j ACCEPT

Step 3: ip route rules
Tell the operating system to forward packets marked by iptables to the loopback interface, where HAProxy can catch them:
#ip rule add fwmark 1 lookup 100

#ip route add local 0.0.0.0/0 dev lo table 100

Step 4: HAProxy configuration
Finally, you can configure HAProxy.
  * Transparent binding can be configured like this:
frontend App_in
        bind ipofhaproxy:10421 transparent

        mode tcp
        default_backend App_out

backend App_out
        mode tcp
        log global
        source 0.0.0.0 usesrc clientip
        balance roundrobin
        server backend1 ipofbackend01:10421 check
        server backend2 ipofbackend02:10421 check

Note: when you reboot the server, the ip rules will be deleted.
This bash script will help you ;)
#!/bin/bash
iptables -F
iptables -F -t nat
iptables -F -t mangle
iptables -t mangle -N DIVERT
iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
iptables -t mangle -A DIVERT -j MARK --set-mark 1
iptables -t mangle -A DIVERT -j ACCEPT
ip rule add fwmark 1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100
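A sketch of one way to restore these rules automatically at boot on CentOS 7: save the script above somewhere on disk (the path and unit name below are hypothetical) and wrap it in a small systemd oneshot unit:

```shell
# Hypothetical location for the rule script from the note above
install -m 0755 haproxy-tproxy.sh /usr/local/sbin/haproxy-tproxy.sh

cat > /etc/systemd/system/haproxy-tproxy.service <<'EOF'
[Unit]
Description=iptables/ip-route rules for HAProxy transparent mode
After=network.target
Before=haproxy.service

[Service]
Type=oneshot
ExecStart=/usr/local/sbin/haproxy-tproxy.sh
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable haproxy-tproxy.service
```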

Wednesday, February 25, 2015

Exporting SSL certificates from Windows to Linux

Step 1:
Export the SSL certificate from IIS; the format must be .pfx.

Step 2:
#cd /etc/nginx/
#mkdir ssl
#cd ssl
#mv /path/to/pfx/sslbackup.pfx .
#chmod 400 sslbackup.pfx

Step 3:
3.1:
Export the public certificate.
#openssl pkcs12 -in ./sslbackup.pfx -clcerts -nokeys -out public.crt
3.2:
Export the key.
#openssl pkcs12 -in ./sslbackup.pfx -nocerts -nodes -out private.rsa
3.3:
Test the certificate.
#openssl s_server -www -accept 443 -cert ./public.crt -key ./private.rsa
!! Check permissions: #chmod 400 /etc/nginx/ssl/*
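Another quick sanity check before pointing nginx at the files: confirm that the exported certificate and key actually belong together by comparing their RSA moduli. A small helper sketch (the function name is mine, the openssl invocations are standard):

```shell
# check_ssl_pair CRT KEY
# Succeeds only when the certificate and private key share the same RSA modulus.
check_ssl_pair() {
    crt_mod=$(openssl x509 -noout -modulus -in "$1" | openssl md5)
    key_mod=$(openssl rsa -noout -modulus -in "$2" | openssl md5)
    [ -n "$crt_mod" ] && [ "$crt_mod" = "$key_mod" ]
}

# e.g.: check_ssl_pair ./public.crt ./private.rsa && echo "pair matches"
```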

Step 4:
#nano /etc/nginx/sites-available/example.org.conf
upstream exampleapp{
        server web-app-node1;
        server web-app-node2;
        }

server {
        listen 80;
        listen 443 ssl;
        server_name example.org;

        ssl on;
        ssl_certificate /etc/nginx/ssl/public.crt;
        ssl_certificate_key /etc/nginx/ssl/private.rsa;

        location / {
        proxy_pass http://exampleapp;
        proxy_set_header Host $host;
        proxy_set_header X-Ssl on;
        }
}
#service nginx configtest

#service nginx reload

Tuesday, December 9, 2014

Nginx, Django, Gunicorn & Mysql Installation and Configuration on Debian 7

Step One;
Update packages
#apt-get update
#apt-get upgrade

Step Two;
Install and create virtualenv
#apt-get install python-virtualenv python-dev
#virtualenv /opt/myenv
Notice that a new directory "myenv" was created in the "/opt" directory. This is where our virtualenv will live. Make sure to replace "/opt/myenv" with the path where you want your virtualenv installed. I typically put my envs in /opt, but this is strictly preference. Some people create a directory named "webapps" at the root of the VPS. Choose whatever method makes the most sense to you.

Step Three ;
Install Django
#source /opt/myenv/bin/activate
You should now see that "(myenv)" has been appended to the beginning of your terminal prompt. This will help you know when your virtualenv is active, and which one is active should you have multiple virtualenvs on the VPS.
With your virtualenv active, we can now install Django. To do this, we will use pip, a Python package manager much like easy_install. Here is the command you will run:
(myenv)root@Django:/opt/myenv/bin#pip install django
You now have Django installed in your virtualenv! Now let's get our database server going.

Step Four ;
Install Mysql Server
Since we don't need our virtualenv active for this part, run the following command to deactivate:
(myenv)root@Django:/opt/myenv/bin#deactivate
This will always deactivate whatever virtualenv is active currently. Now we need to install dependencies for Mysql to work with Django with this command:
#apt-get install python-mysqldb libmysqlclient-dev mysql-server


Step Five;
Install Nginx
#apt-get install nginx

Step Six;
Install Gunicorn
Gunicorn is a very powerful Python WSGI HTTP Server. Since it is a Python package we need to first activate our virtualenv to install it. Here is how we do that:
#source /opt/myenv/bin/activate
Make sure you see the added "myenv" at the beginning of your terminal prompt. With your virtualenv now active, run this command:
(myenv)root@Django:/opt/myenv/bin# pip install gunicorn
Gunicorn is now installed within your virtualenv.
If all you wanted was to get everything installed, feel free to stop here. Otherwise, please continue for instructions on how to configure everything to work together and make your app accessible to others on the web.
(myenv)root@Django:/opt/myenv/bin#deactivate

Step Seven;
Configure Mysql
#mysql -u root -p 
>CREATE DATABASE djangodb;
>CREATE USER 'django'@'localhost' IDENTIFIED BY 'passwd';
>GRANT ALL PRIVILEGES ON djangodb.* TO 'django'@'localhost';
>FLUSH PRIVILEGES;
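A quick way to confirm the grant worked is to connect as the new user and run a trivial query (a sketch; 'passwd' is the password chosen above):

```shell
# Should print a single "1" if the user and grants are in place
mysql -u django -ppasswd djangodb -e 'SELECT 1;'
```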

Step Eight;
Create a Django project
In order to go any further we need a Django project to test with. This will allow us to see if what we are doing is working or not. Change directories into the directory of your virtualenv (in my case /opt/myenv) like so:
#cd /opt/myenv
Now make sure your virtualenv is active. If you're unsure then just run the following command to ensure you're activated:
#source /opt/myenv/bin/activate
With your virtualenv now active, run the following command to start a new Django project:
(myenv)root@Django:/opt/myenv/bin#django-admin.py startproject myproject
You should see a new directory called "myproject" inside your virtualenv directory. This is where our new Django project files live.
In order for Django to be able to talk to our database we need to install a backend for Mysql. Make sure your virtualenv is active and run the following command in order to do this:
(myenv)root@Django:/opt/myenv/bin# pip install mysql-python
Change directories into the new "myproject" directory and then into its subdirectory, which is also called "myproject", like this:
(myenv)root@Django:/opt/myenv/bin#cd /opt/myenv/myproject/myproject
Edit the settings.py file with your editor of choice:
(myenv)root@Django:/opt/myenv/myproject/myproject#nano settings.py
Find the database settings and edit them to look like this:
DATABASES = {
        'default': {
'ENGINE': 'django.db.backends.mysql', 
'NAME': 'djangodb',
'USER': 'django',
'PASSWORD': 'passwd',
'HOST': 'localhost',   # Or an IP Address that your DB is hosted on
'PORT': '3306',
  }
}
Save and exit the file. Now move up one directory so you're in your main Django project directory:
#cd /opt/myenv/myproject
Activate your virtualenv if you haven't already with the following command:
#source /opt/myenv/bin/activate
With your virtualenv active, run the following command so that Django can add its initial configuration and other tables to your database:
(myenv)root@Django:/opt/myenv/myproject/#python manage.py syncdb
You should see some output describing what tables were installed, followed by a prompt asking if you want to create a superuser. This is optional and depends on if you will be using Django's auth system or the Django admin.

Step Nine;
Configure Gunicorn
First lets just go over running Gunicorn with default settings. Here is the command to just run default 
#gunicorn_django --bind yourdomainorip.com:8001
Be sure to replace "yourdomainorip.com" with your domain, or the IP address of your VPS if you prefer. Now go to your web browser and visit yourdomainorip.com:8001 and see what you get. You should get the Django welcome screen.
If you look closely at the output from the above command however, you will notice only one Gunicorn worker booted. What if you are launching a large-scale application on a large VPS? Have no fear! All we need to do is modify the command a bit like so:
#gunicorn_django --workers=3 --bind yourdomainorip.com:8001
Now you will notice that 3 workers were booted instead of just 1 worker. You can change this number to whatever suits your needs.
Since we ran the command to start Gunicorn as root, Gunicorn is now running as root. What if you don't want that? Again, we can alter the command above slightly to accommodate:
#gunicorn_django --workers=3 --user=nobody --bind yourdomainorip.com:8001
If you want to set more options for Gunicorn, then it is best to set up a config file that you can call when running Gunicorn. This will result in a much shorter and easier to read/configure Gunicorn command.
You can place the configuration file for gunicorn anywhere you would like. For simplicity, we will place it in our virtualenv directory. Navigate to the directory of your virtualenv like so:
#cd /opt/myenv
Now open your config file with your preferred editor (nano is used in the example below):
#nano gunicorn_config.py
Add the following contents to the file:
command = '/opt/myenv/bin/gunicorn'
pythonpath = '/opt/myenv/myproject'
bind = '127.0.0.1:8001'
workers = 3
user = 'nobody'
Save and exit the file. These options set the path to the gunicorn binary, add your project directory to your Python path, set the address and port to bind Gunicorn to, set the number of Gunicorn workers, and set the user Gunicorn will run as.
In order to run the server, this time we need a bit longer command. Enter the following command into your prompt:
#/opt/myenv/bin/gunicorn -c /opt/myenv/gunicorn_config.py myproject.wsgi
You will notice that in the above command we pass the "-c" flag. This tells gunicorn that we have a config file we want to use, which we pass in just after the "-c" flag. Lastly, we pass in a Python dotted notation reference to our WSGI file so that Gunicorn knows where our WSGI file is.
Running Gunicorn this way requires that you either run Gunicorn in its own screen session (if you're familiar with using screen), or that you background the process by hitting "ctrl + z" and then typing "bg" and "enter" right after running the Gunicorn command. This backgrounds the process so it continues running even after your current session is closed. It also poses the problem of needing to manually start or restart Gunicorn should your VPS get rebooted or crash for some reason. To solve this problem, most people use supervisord to manage Gunicorn and start/restart it as needed. Installing and configuring supervisord is covered in a separate article.
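As a sketch of that supervisord approach (the config path and program name are assumptions; the paths follow this project's layout):

```shell
# Hypothetical supervisord program definition for this Gunicorn setup
cat > /etc/supervisor/conf.d/gunicorn.conf <<'EOF'
[program:gunicorn]
command=/opt/myenv/bin/gunicorn -c /opt/myenv/gunicorn_config.py myproject.wsgi
directory=/opt/myenv/myproject
user=nobody
autostart=true
autorestart=true
EOF

# pick up the new program definition
supervisorctl reread && supervisorctl update
```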
Lastly, this is by no means an exhaustive list of configuration options for Gunicorn. Please read the Gunicorn documentation found at gunicorn.org for more on this topic.

Step Ten;
Configure Nginx
#service nginx restart
Since we are only setting NGINX to handle static files we need to first decide where our static files will be stored. Open your settings.py file for your Django project and edit the STATIC_ROOT line to look like this:
STATIC_ROOT = "/opt/myenv/static/" 
Add to /opt/myenv/myproject/myproject/settings.py
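Note that nginx will only find files under STATIC_ROOT after Django's collectstatic command has copied them there; with the virtualenv active, something like:

```shell
# Gather all app static files into STATIC_ROOT (/opt/myenv/static/)
source /opt/myenv/bin/activate
cd /opt/myenv/myproject
python manage.py collectstatic
```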
#nano /etc/nginx/sites-available/myproject
server {
        server_name yourdomainorip.com;

        access_log off;

        location /static/ {
            alias /opt/myenv/static/;
        }

        location / {
                proxy_pass http://127.0.0.1:8001;
                proxy_set_header X-Forwarded-Host $server_name;
                proxy_set_header X-Real-IP $remote_addr;
                add_header P3P 'CP="ALL DSP COR PSAa PSDa OUR NOR ONL UNI COM NAV"';
        }
    }
#cd /etc/nginx/sites-enabled
#ln -s ../sites-available/myproject
#rm default
#service nginx restart
And that's it! You now have Django installed and working with Mysql and your app is web accessible with NGINX serving static content and Gunicorn serving as your app server. If you have any questions or further advice, be sure to leave it in the comments section.

Monday, September 15, 2014

Puppet installation and configuration

Puppet
Puppet is a system automation tool.
To give an example: installing and managing 10 servers may seem easy and painless, and you can edit the configuration files one by one.
Once that number starts to grow, though, you will sooner or later run into problems and difficulties. That is where Puppet comes to the rescue and spares you a lot of drudgery.

What you need for the installation :))
VirtualBox,
Debian 7.6 netinstall,

What needs to be done on both the Puppet master and agent sides:

Since we resolve the DNS names locally, add them to the hosts file and make sure the machines can ping each other by name.
Puppet Master ;
#nano /etc/hosts
127.0.0.1       localhost
127.0.1.1       deb7.6  deb7
10.1.0.172      puppetagent

Puppet Agent;
#nano /etc/hosts
127.0.0.1       localhost
127.0.1.1       deb7.6  deb7
10.1.0.171      puppetmaster

Installing NTP is not strictly required; for this test you can sync manually against NTP servers on the same network. But if you are going to integrate this into a production system, a time server is a must, and in the master-agent relationship the clocks have to agree.

Puppet Master Installation;

Puppet Master ;
# wget https://apt.puppetlabs.com/puppetlabs-release-trusty.deb
#dpkg -i puppetlabs-release-trusty.deb
#apt-get update
#apt-get install puppetmaster-passenger

So why did we install puppetmaster-passenger? Let me explain.
With Passenger installed, the processes are controlled by Apache; in other words, if Apache is running, Puppet is running too.

Delete the existing certificates.
#rm -rf /var/lib/puppet/ssl
Let's do Puppet's basic configuration:
#nano /etc/puppet/puppet.conf
[main]
logdir=/var/log/puppet
vardir=/var/lib/puppet
ssldir=/var/lib/puppet/ssl
rundir=/var/run/puppet
factpath=$vardir/lib/facter
templatedir=$confdir/templates

[master]
# These are needed when the puppetmaster is run by passenger
# and can safely be removed if webrick is used.
ssl_client_header = SSL_CLIENT_S_DN
ssl_client_verify_header = SSL_CLIENT_VERIFY

In addition, we need to add "certname = puppet" under [main], and we also need to enter the FQDNs under [main]: "dns_alt_names = puppet, puppetmaster".

Save and exit.

Now let's generate the SSL certificates.
#puppet master --verbose --no-daemonize
It should produce output like the following:
Info: Creating a new SSL key for ca
Info: Creating a new SSL certificate request for ca
Info: Certificate Request fingerprint (SHA256): EC:7D:ED:15:DE:E3:F1:49:1A:1B:9C:D8:04:F5:46:EF:B4:33:91:91:B6:5D:19:AC:21:D6:40:46:4A:50:5A:29
Notice: Signed certificate request for ca
...
Notice: Signed certificate request for puppet
Notice: Removing file Puppet::SSL::CertificateRequest puppet at '/var/lib/puppet/ssl/ca/requests/puppet.pem'
Notice: Removing file Puppet::SSL::CertificateRequest puppet at '/var/lib/puppet/ssl/certificate_requests/puppet.pem'
Notice: Starting Puppet master version 3.6.2
If you want to take a look at the certificates:
#puppet cert list --all

Let's create a file like the one below; this file is where we define how the hosts will be installed and configured.

Finally:
#service apache2 restart

Puppet Agent;

# wget https://apt.puppetlabs.com/puppetlabs-release-trusty.deb
#dpkg -i puppetlabs-release-trusty.deb
#apt-get update
#apt-get install puppet
Change the value in the file below to "yes".
#nano /etc/default/puppet
START=yes

Let's configure the agent.
#nano /etc/puppet/puppet.conf
Delete the templatedir line and the [master] section.
[agent]
server=puppetmaster
#service puppet start


Puppet master;
To approve the request coming from the agent, we sign its certificate on the puppet master.

#puppet cert list
"puppetagent.local"(SHA256) B1:96:ED:1F:F7:1E:40:53:C1:D4:1B:3C:75:F4:7C:0B:A9:4C:1B:5D:95:2B:79:C0:08:DD:2B:F4:4A:36:EE:E3
#puppet cert sign puppetagent.local
To remove a certificate:
#puppet cert clean "hostname"

Switch over to the puppet agent and test the agent side:
#puppet agent --test
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Loading facts in /var/lib/puppet/lib/facter/pe_version.rb
Info: Loading facts in /var/lib/puppet/lib/facter/puppet_vardir.rb
Info: Loading facts in /var/lib/puppet/lib/facter/root_home.rb
Info: Loading facts in /var/lib/puppet/lib/facter/facter_dot_d.rb
Info: Caching catalog for hostname
Info: Applying configuration version '1407966707'
For a system test:
#nano /etc/puppet/manifests/site.pp
file {'/tmp/example-ip':                                            # resource type file and filename
  ensure  => present,                                               # make sure it exists
  mode    => 0644,                                                  # file permissions
  content => "Here is my Public IP Address: ${ipaddress_eth0}.\n",  # note the ipaddress_eth0 fact
}
When we test on the agent side:
#cat /tmp/example-ip
Here is my Public IP Address: 128.131.192.11.

Wednesday, July 9, 2014

OpenStack Icehouse Installation on Ubuntu 12.04, part 3

Configure Compute Node

Compute node settings;
While installing the packages below, we need to answer 'Yes' to the Supermin prompt.

#apt-get install nova-compute-kvm python-guestfs
(Supermin: 'Yes')

#dpkg-statoverride --update --add root root 0644 /boot/vmlinuz-$(uname -r)
Let's create the statoverride file so that the command above also runs at boot:
# nano /etc/kernel/postinst.d/statoverride
#!/bin/sh
version="$1"
# passing the kernel version is required
[ -z "${version}" ] && exit 0
dpkg-statoverride --update --add root root 0644 /boot/vmlinuz-${version}


# chmod +x /etc/kernel/postinst.d/statoverride

#nano /etc/nova/nova.conf
[DEFAULT]
rpc_backend = rabbit
rabbit_host = Controller
rabbit_password = RABBIT_PASS
auth_strategy = keystone
my_ip = 10.0.0.31
vnc_enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = 10.0.0.31
novncproxy_base_url = http://Controller:6080/vnc_auto.html
glance_host = Controller

[database]
# The SQLAlchemy connection string used to connect to the database
connection = mysql://nova:nova@Controller/nova
[keystone_authtoken]
auth_uri = http://Controller:5000
auth_host = Controller
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = NOVA_PASS


#nano /etc/nova/nova-compute.conf
[libvirt]
...
virt_type = qemu

#egrep -c '(vmx|svm)' /proc/cpuinfo
#rm /var/lib/nova/nova.sqlite
#service nova-compute restart

Network (Legacy)

###Controller Node
#nano  /etc/nova/nova.conf
[DEFAULT]
...
network_api_class = nova.network.api.API
security_group_api = nova
###
# service nova-api restart ; service nova-scheduler restart ; service nova-conductor restart

####Compute Node

#apt-get install nova-network nova-api-metadata

#
#nano /etc/nova/nova.conf
[DEFAULT]
...
network_api_class = nova.network.api.API
security_group_api = nova
firewall_driver = nova.virt.libvirt.firewall.IptablesFirewallDriver
network_manager = nova.network.manager.FlatDHCPManager
network_size = 254
allow_same_net_traffic = False
multi_host = True
send_arp_for_ha = True
share_dhcp_address = True
force_dhcp_release = True
flat_network_bridge = br100
flat_interface = eth1
public_interface = eth0
#
#service nova-network restart ; service nova-api-metadata restart

##########Controller node
#source admin-openrc.sh

#nova network-create demo-net --bridge br100 --multi-host T \
--fixed-range-v4 10.1.0.0/24

To make sure the services are running:
#nova-manage service list

Dashboard

#apt-get install apache2 memcached libapache2-mod-wsgi openstack-dashboard
If you want to remove the Ubuntu theme:
# apt-get remove --purge openstack-dashboard-ubuntu-theme

http://controller/horizon


OpenStack Icehouse Installation on Ubuntu 12.04, part 1

The setup will be as follows:
1 controller
1 compute

I used Ubuntu 12.04 as the operating system, but you can use 14.04 if you like.
The settings we need to make by default on the machines we will use as controller and compute:
Assign static IPs and make sure they can resolve each other's names.

Network Settings;
Controller and Compute;

#nano /etc/hosts
# compute
10.0.0.31 compute
# controller
10.0.0.11 controller


Controller ;
#nano /etc/network/interfaces

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet static
        address 10.0.0.11
        netmask 255.255.255.0
        network 10.0.0.0
        broadcast 10.0.0.255
        gateway 10.0.0.1

# service networking stop && service networking start
#ping compute
Compute ;
#nano /etc/network/interfaces

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet static
        address 10.0.0.31
        netmask 255.255.255.0
        network 10.0.0.0
        broadcast 10.0.0.255
        gateway 10.0.0.1

# The external network interface
auto eth1
iface eth1 inet manual
                up ip link set dev $IFACE up
                down ip link set dev $IFACE down

# service networking stop && service networking start
#ping controller

Don't forget to run the ping test!

NTP Settings;
Controller;
#apt-get install ntp

Compute;
#apt-get install ntp
#nano /etc/ntp.conf
server controller iburst
server 0.deb.pool.ntp.org

#service ntp restart

Database installation;
Controller;
#apt-get install python-mysqldb mysql-server
#nano /etc/mysql/my.cnf
[mysqld]
...
bind-address = 10.0.0.11
default-storage-engine = innodb
collation-server = utf8_general_ci
init-connect = 'SET NAMES utf8'
character-set-server = utf8

# mysql_install_db
# mysql_secure_installation

Compute;
#apt-get install python-mysqldb

Upgrading the controller and compute nodes to Icehouse;
We only do these steps on Ubuntu 12.04; if you are using 14.04, they are not needed.
Controller and Compute;
# apt-get install python-software-properties
# add-apt-repository cloud-archive:icehouse
# apt-get update
# apt-get dist-upgrade
# apt-get install linux-image-generic-lts-saucy linux-headers-generic-lts-saucy
# reboot

Installing the messaging service;
Controller;
RABBIT_PASS = a password you choose.
# apt-get install rabbitmq-server
#rabbitmqctl change_password guest RABBIT_PASS

Identity service installation and configuration;
Controller;
# apt-get install keystone
#nano /etc/keystone/keystone.conf
[database]
# The SQLAlchemy connection string used to connect to the database
connection = mysql://keystone:KEYSTONE_DBPASS@controller/keystone

# rm /var/lib/keystone/keystone.db
#mysql -u root -p
mysql> CREATE DATABASE keystone;
mysql> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
IDENTIFIED BY 'KEYSTONE_DBPASS';
mysql> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
IDENTIFIED BY 'KEYSTONE_DBPASS';
mysql> exit

# su -s /bin/sh -c "keystone-manage db_sync" keystone

# openssl rand -hex 10 (this will generate a random token like the one below.)
db4429b71cd2b9b54d47

We use the token generated above as the admin_token below,
and we specify where the logs will be written.
#nano /etc/keystone/keystone.conf
[DEFAULT]
# A "shared secret" between keystone and other openstack services
admin_token = db4429b71cd2b9b54d47
log_dir = /var/log/keystone

# service keystone restart

Creating the user and service records;

export OS_SERVICE_TOKEN=db4429b71cd2b9b54d47
export OS_SERVICE_ENDPOINT=http://Controller:35357/v2.0

###Admin
#keystone user-create --name=admin --pass=admin1 --email=armagan.yaman@mail.com
#keystone role-create --name=admin
#keystone tenant-create --name=admin --description="Admin Tenant"
#keystone user-role-add --user=admin --tenant=admin --role=admin
#keystone user-role-add --user=admin --role=_member_ --tenant=admin

###User
#keystone user-create --name=demo --pass=demopass --email=demo@mail.com
#keystone tenant-create --name=demo --description="Demo Tenant"
#keystone user-role-add --user=demo --role=_member_ --tenant=demo

###service
#keystone tenant-create --name=service --description="Service Tenant"
#keystone service-create --name=keystone --type=identity \
--description="OpenStack Identity"

#keystone endpoint-create \
--service-id=$(keystone service-list | awk '/ identity / {print $2}') \
--publicurl=http://Controller:5000/v2.0 \
--internalurl=http://Controller:5000/v2.0 \
--adminurl=http://Controller:35357/v2.0

####Identity Service
#unset OS_SERVICE_TOKEN OS_SERVICE_ENDPOINT
#keystone --os-username=admin --os-password=admin1 \
--os-auth-url=http://Controller:35357/v2.0 token-get
#keystone --os-username=admin --os-password=admin1 \
--os-tenant-name=admin --os-auth-url=http://Controller:35357/v2.0 \
token-get

Create a file; we will define some variables inside it:
#nano admin-openrc.sh
export OS_USERNAME=admin
export OS_PASSWORD=admin1
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://Controller:35357/v2.0

Let's verify what we have done.
#source admin-openrc.sh
#keystone token-get
#keystone user-list
#keystone user-role-list --user admin --tenant admin