
Thursday, December 21, 2017

ActiveMQ Artemis Installation on Debian 9

This is a basic installation of ActiveMQ Artemis.
I am currently testing the cluster and failover configuration of Artemis; when it is ready I will share it on my blog.

Zulu OpenJDK for Java:

$sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 0x219BD9C9
$sudo apt-add-repository 'deb http://repos.azulsystems.com/debian stable main'
$sudo apt-get update
$sudo apt-get install zulu-8

More about Zulu:
https://www.azul.com/

Artemis Installation:
$wget http://ftp.itu.edu.tr/Mirror/Apache/activemq/activemq-artemis/2.4.0/apache-artemis-2.4.0-bin.tar.gz
$ sudo tar -zxvf apache-artemis-2.4.0-bin.tar.gz -C /opt/
$cd /opt

The most commonly used artemis commands are:
    address     Address tools group (create|delete|update|show) (example ./artemis address create)
    browser     It will browse messages on an instance
    consumer    It will consume messages from an instance
    create      creates a new broker instance
    data        data tools group (print) (example ./artemis data print)
    help        Display help information
    mask        mask a password and print it out
    migrate1x   Migrates the configuration of a 1.x Artemis Broker
    producer    It will send messages to an instance
    queue       Queue tools group (create|delete|update|stat) (example ./artemis queue create)

sudo ./bin/artemis create /opt/broker-name
You can now start the broker by executing:

   "/opt/broker-name/bin/artemis" run

Or you can run the broker in the background using:

   "/opt/broker-name/bin/artemis-service" start
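
Once the broker is running, the producer and consumer tools from the command list above give a quick smoke test. This is a sketch; tcp://localhost:61616 is the default acceptor URL in a fresh broker instance:

```shell
# From the broker instance directory, e.g. /opt/broker-name:
# send 10 test messages, then consume them back.
./bin/artemis producer --url tcp://localhost:61616 --message-count 10
./bin/artemis consumer --url tcp://localhost:61616 --message-count 10
```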

Base Configuration of Artemis:
If you want to access the web UI from anywhere, change the "web bind" address in <broker>/etc/bootstrap.xml as shown below.

<web bind="http://0.0.0.0:8161" path="web">
       <app url="activemq-branding" war="activemq-branding.war"/>
       <app url="artemis-plugin" war="artemis-plugin.war"/>
       <app url="console" war="console.war"/>
   </web>
If you do not configure Jolokia, you cannot reach the Artemis statistics.
The configuration below is enough for access.
Jolokia settings:
<broker>/etc/jolokia-access.xml

<allow-origin>*://<ipaddressorfqdnofartemis>*</allow-origin>
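
For example, if the broker were reachable at artemis01.example.com (a hypothetical name, standing in for ipaddressorfqdnofartemis above), the line would be:

```xml
<allow-origin>*://artemis01.example.com*</allow-origin>
```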

Create a Systemd File for Artemis:
$sudo nano /etc/systemd/system/artemis.service

#############################################
[Unit]
Description=Apache ActiveMQ Artemis
After=network-online.target

[Service]
Type=forking
WorkingDirectory=/opt/broker-name/bin
ExecStart=/opt/broker-name/bin/artemis-service start
ExecStop=/opt/broker-name/bin/artemis-service stop
Restart=on-abort
User=root
Group=root

[Install]
WantedBy=multi-user.target
################################################

$sudo systemctl daemon-reload

Now you can start Artemis as a systemd service.
$sudo systemctl start artemis.service

You can reach the web admin panel:
http://ipaddressorfqdnofartemis:8161

Thursday, December 3, 2015

Dtrace installation on Debian 7

When I installed dtrace, I got an error like this: "could not load module build-3.2.0-4-amd64/driver/dtracedrv.ko: No such file or directory"

How I solved the problem:
$sudo apt-get install linux-headers-$(uname -r)

Dtrace installation;

$git clone "https://github.com/dtrace4linux/linux.git" dtrace
$sudo apt-get install linux-headers-$(uname -r)
$cd dtrace
$tools/get-deps.pl
$sudo make all 
$sudo make install
$sudo make load
$sudo dtrace -l
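
If dtrace -l lists probes, the driver is loaded. A simple aggregation one-liner (run as root; stop it with Ctrl-C) is a reasonable first check that tracing really works, assuming the syscall provider is available in your dtrace4linux build:

```shell
# Count system calls per executable until interrupted.
sudo dtrace -n 'syscall:::entry { @counts[execname] = count(); }'
```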

Tuesday, April 21, 2015

Haproxy Transparent Mode on Centos 7

HAProxy can't do transparent binding or proxying alone; it needs a Linux kernel and operating system built and tuned for it.
CentOS 7, however, supports HAProxy transparent mode.
Step-by-step configuration:
1. sysctl settings
2. iptables rules
3. ip route rules
4. HAProxy configuration

Step 1 is sysctl settings:
 – net.ipv4.ip_forward
 – net.ipv4.ip_nonlocal_bind
# echo 1 > /proc/sys/net/ipv4/ip_forward
# echo 1 > /proc/sys/net/ipv4/ip_nonlocal_bind
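
The two echo commands are lost on reboot. A small sketch of persisting them (persist_sysctl is a helper name I made up; on a real system point it at /etc/sysctl.conf and then run sysctl -p):

```shell
# Append the two settings to a sysctl conf file if missing (idempotent).
persist_sysctl() {
    conf=$1
    grep -q '^net.ipv4.ip_forward' "$conf" 2>/dev/null \
        || echo 'net.ipv4.ip_forward = 1' >> "$conf"
    grep -q '^net.ipv4.ip_nonlocal_bind' "$conf" 2>/dev/null \
        || echo 'net.ipv4.ip_nonlocal_bind = 1' >> "$conf"
}

# As root on the HAProxy box:
#   persist_sysctl /etc/sysctl.conf && sysctl -p
```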

Step 2 is iptables rules;
#iptables -F -t mangle
#iptables -F
#iptables -F -t nat
#iptables -t mangle -N DIVERT
#iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
#iptables -t mangle -A DIVERT -j MARK --set-mark 1

#iptables -t mangle -A DIVERT -j ACCEPT

Step 3 is ip route rules:
Tell the operating system to forward packets marked by iptables to the loopback interface, where HAProxy can catch them:
#ip rule add fwmark 1 lookup 100

#ip route add local 0.0.0.0/0 dev lo table 100

Step 4 is haproxy configuration;
Finally, you can configure HAProxy.
  * Transparent binding can be configured like this:
frontend App_in
        bind ipofhaproxy:10421 transparent

        mode tcp

backend App_out
        mode tcp
        log global
        source 0.0.0.0 usesrc clientip
        balance roundrobin
        server backend1 ipofbackend01:10421 check
        server backend2 ipofbackend02:10421 check

Note: When you reboot the server, the ip rules will be deleted.
This bash script will help you ;)
#!/bin/bash
iptables -F
iptables -F -t nat
iptables -t mangle -N DIVERT
iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
iptables -t mangle -A DIVERT -j MARK --set-mark 1
iptables -t mangle -A DIVERT -j ACCEPT
ip rule add fwmark 1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100
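
On CentOS 7 a oneshot systemd unit is a natural way to run that script at boot. A sketch, assuming the script above is saved as /usr/local/sbin/haproxy-tproxy.sh and made executable (both the path and the unit name are my own choices):

```ini
[Unit]
Description=iptables and ip rules for HAProxy transparent mode
After=network-online.target
Wants=network-online.target

[Service]
Type=oneshot
ExecStart=/usr/local/sbin/haproxy-tproxy.sh
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
```

Save it as /etc/systemd/system/haproxy-tproxy.service and enable it with systemctl enable haproxy-tproxy.service.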

Wednesday, July 9, 2014

OpenStack Icehouse Installation on Ubuntu 12.04, Part 3

Configure Compute Node

Compute node settings:
While installing the packages below, we need to answer 'Yes' to the Supermin prompt.

#apt-get install nova-compute-kvm python-guestfs

#
#dpkg-statoverride --update --add root root 0644 /boot/vmlinuz-$(uname -r)
Let's create the statoverride hook so that the command above runs automatically for every new kernel:
# nano /etc/kernel/postinst.d/statoverride
#!/bin/sh
version="$1"
# passing the kernel version is required
[ -z "${version}" ] && exit 0
dpkg-statoverride --update --add root root 0644 /boot/vmlinuz-${version}


# chmod +x /etc/kernel/postinst.d/statoverride

#nano /etc/nova/nova.conf
/etc/nova/nova.conf
[DEFAULT]
rpc_backend = rabbit
rabbit_host = Controller
rabbit_password = RABBIT_PASS
auth_strategy = keystone
my_ip = 10.0.0.31
vnc_enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = 10.0.0.31
novncproxy_base_url = http://Controller:6080/vnc_auto.html
glance_host = Controller

[database]
# The SQLAlchemy connection string used to connect to the database
connection = mysql://nova:nova@Controller/nova
[keystone_authtoken]
auth_uri = http://Controller:5000
auth_host = Controller
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = NOVA_PASS


#nano /etc/nova/nova-compute.conf
[libvirt]
...
virt_type = qemu

#egrep -c '(vmx|svm)' /proc/cpuinfo
#rm /var/lib/nova/nova.sqlite
#service nova-compute restart

Network (Legacy)

###Controller Node
#nano  /etc/nova/nova.conf
[DEFAULT]
...
network_api_class = nova.network.api.API
security_group_api = nova
###
# service nova-api restart ; service nova-scheduler restart ; service nova-conductor restart

####Compute Node

#apt-get install nova-network nova-api-metadata

#
#nano /etc/nova/nova.conf
[DEFAULT]
...
network_api_class = nova.network.api.API
security_group_api = nova
firewall_driver = nova.virt.libvirt.firewall.IptablesFirewallDriver
network_manager = nova.network.manager.FlatDHCPManager
network_size = 254
allow_same_net_traffic = False
multi_host = True
send_arp_for_ha = True
share_dhcp_address = True
force_dhcp_release = True
flat_network_bridge = br100
flat_interface = eth1
public_interface = eth0
#
#service nova-network restart ; service nova-api-metadata restart

##########Controller node
#source admin-openrc.sh

#nova network-create demo-net --bridge br100 --multi-host T \
--fixed-range-v4 10.1.0.0/24

To make sure the services are running:
#nova-manage service list

Dashboard

#apt-get install apache2 memcached libapache2-mod-wsgi openstack-dashboard
If you want to remove the Ubuntu theme:
# apt-get remove --purge openstack-dashboard-ubuntu-theme

http://controller/horizon


OpenStack Icehouse Installation on Ubuntu 12.04, Part 2

Let's continue where we left off ;)

OpenStack services

#apt-get install python-pip

#apt-get install python-novaclient

Image services (Glance)
#apt-get install glance python-glanceclient
#nano /etc/glance/glance-api.conf
#nano /etc/glance/glance-registry.conf
[database]
connection = mysql://glance:glance@Controller/glance
#
#nano /etc/glance/glance-api.conf
[DEFAULT]
...
rpc_backend = rabbit
rabbit_host = Controller
rabbit_password = RABBIT_PASS

# rm /var/lib/glance/glance.sqlite
#mysql -u root -p
mysql> CREATE DATABASE glance;
mysql> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
IDENTIFIED BY 'GLANCE_DBPASS';
mysql> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
IDENTIFIED BY 'GLANCE_DBPASS';

# su -s /bin/sh -c "glance-manage db_sync" glance

#keystone user-create --name=glance --pass=glance \
--email=glance@example.com

#keystone user-role-add --user=glance --tenant=service --role=admin

#nano /etc/glance/glance-api.conf
#nano /etc/glance/glance-registry.conf

[keystone_authtoken]
auth_uri = http://Controller:5000
auth_host = Controller
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = glance
admin_password = GLANCE_PASS

[paste_deploy]
flavor = keystone

#keystone service-create --name=glance --type=image \
--description="OpenStack Image Service"

#keystone endpoint-create \
--service-id=$(keystone service-list | awk '/ image / {print $2}') \
--publicurl=http://Controller:9292 \
--internalurl=http://Controller:9292 \
--adminurl=http://Controller:9292

#service glance-registry restart ; service glance-api restart
Testing the Image Service installation:
#mkdir /tmp/images

#cd /tmp/images/
#wget http://cdn.download.cirros-cloud.net/0.3.2/cirros-0.3.2-x86_64-disk.img
#glance image-create --name "cirros-0.3.2-x86_64" --disk-format qcow2 \
--container-format bare --is-public True --progress < cirros-0.3.2-x86_64-disk.img

#glance image-list

To load an image directly from the web:
#glance image-create --name="cirros-0.3.2-x86_64" --disk-format=qcow2 \
--container-format=bare --is-public=true \
--copy-from http://cdn.download.cirros-cloud.net/0.3.2/cirros-0.3.2-x86_64-disk.img

Install Compute controller services
#apt-get install nova-api nova-cert nova-conductor nova-consoleauth \
nova-novncproxy nova-scheduler python-novaclient

#nano /etc/nova/nova.conf
rpc_backend = rabbit
rabbit_host = Controller
rabbit_password = rabbit
connection = mysql://nova:nova@Controller/nova
my_ip = 10.0.0.11
vncserver_listen = 10.0.0.11
vncserver_proxyclient_address = 10.0.0.11

## rm /var/lib/nova/nova.sqlite
#mysql -u root -p
mysql> CREATE DATABASE nova;
mysql> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
mysql> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';

# su -s /bin/sh -c "nova-manage db sync" nova
# keystone user-create --name=nova --pass=NOVA_PASS --email=nova@example.com
#keystone user-role-add --user=nova --tenant=service --role=admin
#nano /etc/nova/nova.conf
[DEFAULT]
...
auth_strategy = keystone

[keystone_authtoken]
auth_uri = http://Controller:5000
auth_host = Controller
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = NOVA_PASS
#
#nano /etc/nova/api-paste.ini
auth_uri = http://Controller:5000
auth_host = Controller
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = NOVA_PASS

#keystone service-create --name=nova --type=compute \
--description="OpenStack Compute"
#
#keystone endpoint-create \
--service-id=$(keystone service-list | awk '/ compute / {print $2}') \
--publicurl=http://Controller:8774/v2/%\(tenant_id\)s \
--internalurl=http://Controller:8774/v2/%\(tenant_id\)s \
--adminurl=http://Controller:8774/v2/%\(tenant_id\)s

#service nova-api restart ; service nova-cert restart ;service nova-consoleauth restart ; service nova-scheduler restart ; service nova-conductor restart ;service nova-novncproxy restart

#nova image-list


OpenStack Icehouse Installation on Ubuntu 12.04, Part 1

The layout will be as follows:
1 controller
1 compute

I used Ubuntu 12.04 as the operating system; you can use 14.04 if you prefer.
On the machines we will use as controller and compute, the baseline settings we need are:
Assigning static IPs and making sure the machines can resolve each other's names.

Network Settings:
Controller and Compute:

#nano /etc/hosts
# compute
10.0.0.31 compute
# controller
10.0.0.11 controller


Controller:
#nano /etc/network/interfaces

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet static
        address 10.0.0.11
        netmask 255.255.255.0
        network 10.0.0.0
        broadcast 10.0.0.255
        gateway 10.0.0.1

# service networking stop && service networking start
#ping compute
Compute:
#nano /etc/network/interfaces

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet static
        address 10.0.0.31
        netmask 255.255.255.0
        network 10.0.0.0
        broadcast 10.0.0.255
        gateway 10.0.0.1

# The external network interface
auto eth1
iface eth1 inet manual
                up ip link set dev $IFACE up
                down ip link set dev $IFACE down

# service networking stop && service networking start
#ping controller

Don't forget to run the ping test!

NTP Settings:
Controller:
#apt-get install ntp

Compute;
#apt-get install ntp
#nano /etc/ntp.conf
server controller iburst
server 0.deb.pool.ntp.org

#service ntp restart

Database installation:
Controller:
#apt-get install python-mysqldb mysql-server
#nano /etc/mysql/my.cnf
[mysqld]
...
bind-address = 10.0.0.11
default-storage-engine = innodb
collation-server = utf8_general_ci
init-connect = 'SET NAMES utf8'
character-set-server = utf8

# mysql_install_db
# mysql_secure_installation

Compute;
#apt-get install python-mysqldb

Updating the Controller and Compute nodes to Icehouse:
We do this only on Ubuntu 12.04; if you are using 14.04, it is not needed.
Controller and Compute:
# apt-get install python-software-properties
# add-apt-repository cloud-archive:icehouse
# apt-get update
# apt-get dist-upgrade
# apt-get install linux-image-generic-lts-saucy linux-headers-generic-lts-saucy
# reboot

Installing the messaging service:
Controller:
RABBIT_PASS is a password you choose.
# apt-get install rabbitmq-server
#rabbitmqctl change_password guest RABBIT_PASS

Identity Service installation and configuration:
Controller:
# apt-get install keystone
#nano /etc/keystone/keystone.conf
[database]
# The SQLAlchemy connection string used to connect to the database
connection = mysql://keystone:KEYSTONE_DBPASS@controller/keystone

# rm /var/lib/keystone/keystone.db
#mysql -u root -p
mysql> CREATE DATABASE keystone;
mysql> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
IDENTIFIED BY 'KEYSTONE_DBPASS';
mysql> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
IDENTIFIED BY 'KEYSTONE_DBPASS';
mysql> exit

# su -s /bin/sh -c "keystone-manage db_sync" keystone

# openssl rand -hex 10 (This will generate a random hex token like the one below.)
db4429b71cd2b9b54d47

We use the token generated above as the admin token below.
We also set where the logs will be written.
#nano /etc/keystone/keystone.conf
[DEFAULT]
# A "shared secret" between keystone and other openstack services
admin_token = db4429b71cd2b9b54d47
log_dir = /var/log/keystone

# service keystone restart
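
The generate-and-paste steps above can be combined into a small helper. This is a sketch; set_admin_token is a name I made up, and taking the file path as an argument lets you try it on a copy before touching /etc/keystone/keystone.conf:

```shell
# Replace the admin_token line in the given keystone.conf with a fresh
# random token, and print the token so it can be reused later.
set_admin_token() {
    conf=$1
    token=$(openssl rand -hex 10)
    sed -i "s|^admin_token *=.*|admin_token = ${token}|" "$conf"
    echo "$token"
}

# As root:
#   token=$(set_admin_token /etc/keystone/keystone.conf)
#   service keystone restart
```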

Creating the user and service records:

export OS_SERVICE_TOKEN=db4429b71cd2b9b54d47
export OS_SERVICE_ENDPOINT=http://Controller:35357/v2.0

###Admin
#keystone user-create --name=admin --pass=admin1 --email=armagan.yaman@mail.com
#keystone role-create --name=admin
#keystone tenant-create --name=admin --description="Admin Tenant"
#keystone user-role-add --user=admin --tenant=admin --role=admin
#keystone user-role-add --user=admin --role=_member_ --tenant=admin

###User
#keystone user-create --name=demo --pass=demopass --email=demo@mail.com
#keystone tenant-create --name=demo --description="Demo Tenant"
#keystone user-role-add --user=demo --role=_member_ --tenant=demo

###service
#keystone tenant-create --name=service --description="Service Tenant"
#keystone service-create --name=keystone --type=identity \
--description="OpenStack Identity"

#keystone endpoint-create \
--service-id=$(keystone service-list | awk '/ identity / {print $2}') \
--publicurl=http://Controller:5000/v2.0 \
--internalurl=http://Controller:5000/v2.0 \
--adminurl=http://Controller:35357/v2.0

####Identity Service
#unset OS_SERVICE_TOKEN OS_SERVICE_ENDPOINT
#keystone --os-username=admin --os-password=admin1 \
--os-auth-url=http://Controller:35357/v2.0 token-get
#keystone --os-username=admin --os-password=admin1 \
--os-tenant-name=admin --os-auth-url=http://Controller:35357/v2.0 \
token-get

Create a file; we will define the environment variables inside it:
#nano admin-openrc.sh
export OS_USERNAME=admin
export OS_PASSWORD=admin1
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://Controller:35357/v2.0
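
A matching rc file for the demo user created earlier follows the same pattern. This is a sketch; pointing regular users at the public identity endpoint on port 5000 instead of the admin port 35357 is my assumption here:

```shell
# demo-openrc.sh: credentials for the demo user/tenant created above.
export OS_USERNAME=demo
export OS_PASSWORD=demopass
export OS_TENANT_NAME=demo
export OS_AUTH_URL=http://Controller:5000/v2.0
```

Load it with: source demo-openrc.sh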

Let's verify what we have done:
#source admin-openrc.sh
#keystone token-get
#keystone user-list
#keystone user-role-list --user admin --tenant admin