________________________________________________________________________________

Prisma Puppet Project - Beginning Date: 10/11/2014
@Author: Gabriele Grisso, Federico Fausto Santoro, Alberto Di Savia Puglisi

This project aims to deploy a working OpenStack environment in one shot.
Of course it's a WORK IN PROGRESS and I really hope it will come to an end!
________________________________________________________________________________

CHANGELOG:
[19-03-2015] Switched to Puppet Master-Agent architecture. The standalone 
             architecture is not recommended for production use.
________________________________________________________________________________

DISCLAIMER:

[10-02-15]
I'm not using Ubuntu 12.04 any more. My boss told me to convert the project
to run on CentOS 7 machines. I will maintain the modules that were
originally developed for Ubuntu systems, but the compatibility of newer
modules and functionalities is guaranteed only for CentOS 7.
--------------------------------------------------------------------------------
I am using Ubuntu Server 12.04 LTS (Precise). The modules should also work
on other Debian-based distros, but I have not tested them and cannot
guarantee it.
________________________________________________________________________________
  
What works on CentOS 7:
  > Basic Puppet settings
  > Hiera and librarian-puppet
  > Galera cluster of MariaDB database servers (3 to 5 nodes)
  > HAProxy load balancers for the Galera cluster (2 to 3 nodes)
  > HA of these load balancers through Keepalived
  > OpenStack users, DBs and privilege entries
  > RabbitMQ cluster (3 to 5 nodes)
  > GlusterFS cluster (2 to 5 nodes)
  > Puppet Master-Agent architecture with Apache Passenger
  > Directory environments

What should work better:
  > The Puppet Master should be in HA
  > ssh exec.pp should hand the SSH id_rsa to a puppetmaster (or
    something similar) and the id_rsa.pub to the other nodes
  > Hiera variables are all in one single file
  > you tell me...

Next steps:
  > Zabbix
  > Pacemaker and Corosync
  > OpenStack services
  > Keystone
  > ..
________________________________________________________________________________

PUPPET MASTER HOW-TO:

  1) Modify the /data/common.yaml file according to your needs.
  Here are some parameters you should pay attention to:

    > galera_nodes -> the number of hosts the HAProxy LB and the Galera
      module itself should consider (3, 4 or 5). 4 is the default value.

    > rabbit_nodes -> the number of hosts that will form the RabbitMQ
      cluster (3, 4 or 5). 3 is the default value.

    > haproxy_nodes -> the number of HAProxy nodes. Choose between 2 and 3.

    > hap1_priority, hap2_priority, hap3_priority -> the priority of each
      load balancer. 100, 101 and 102 are the default values and should be
      fine.

    > gluster_nodes -> the number of GlusterFS nodes forming the cluster
      (2 to 5).

    Remember that if you change the number of hosts (galera_nodes,
    rabbit_nodes...), you must also provide at least a matching number of
    corresponding hostnames and IPs! A sketch of these values follows.
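
    A minimal sketch of how these entries might look in /data/common.yaml,
    assuming Hiera's standard YAML backend (the hostname/IP key names are
    illustrative; check the file shipped with the repo for the real ones):

      # /data/common.yaml (illustrative excerpt)
      galera_nodes: 4           # 3 to 5
      rabbit_nodes: 3           # 3 to 5
      haproxy_nodes: 2          # 2 or 3
      hap1_priority: 100
      hap2_priority: 101
      hap3_priority: 102
      gluster_nodes: 2          # 2 to 5
      # one hostname/IP pair per declared node, e.g.:
      galera1_hostname: galera-1.ct.infn.it
      galera1_ip: 10.55.1.10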
  
  2) Edit /manifests/nodes.pp to meet your requirements:
     > change the hostnames if needed.
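
     A node definition in nodes.pp is plain Puppet; the node name and
     class below are illustrative, not necessarily the repo's actual ones:

       # illustrative entry in /manifests/nodes.pp
       node 'galera-1.ct.infn.it' {
         include galera        # hypothetical class name
       }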
  
  3) After you have completed the previous stages, run this script to
    install the dependencies, copy the files to their working directories
    and install a production-ready web server (Apache Passenger) that will
    run as the Puppet Master:

    sh /root/prisma/scripts/centos_install.sh
________________________________________________________________________________

PUPPET AGENTS HOW-TO:
  
  1) Update your /etc/hosts with at least these two lines:
    10.55.1.168 	puppet.ct.infn.it	puppet
    10.55.1.169 	puppet-agent.ct.infn.it	puppet-agent

    The first line refers to the Puppet Master. You should modify the IP
    address and FQDN according to your setup. I recommend not changing the
    'puppet' hostname.
    The second line refers to the particular (local) node you want to
    configure through the Puppet Master. Since you have already configured
    the PM, I assume the PM's hostnames are set up, so the PM knows
    everything about your nodes.
    
  2) Install Puppet on puppet agent nodes. Refer to this guide for CentOS: 
    https://docs.puppetlabs.com/guides/install_puppet/install_el.html

  3) Run this command to request a certificate from the PM and let it
    apply the proper configuration:

    puppet agent -t --server puppet.ct.infn.it --environment test

    Change the PM's FQDN and the environment name according to your needs.
    The default project environment is 'test'. The production directory is
    present because Puppet requires it, but it is empty (for now).
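
    Note that unless the master autosigns certificates, the first run will
    leave a pending request; assuming the stock Puppet 3 cert CLI, sign it
    on the master with:

    puppet cert sign puppet-agent.ct.infn.it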
________________________________________________________________________________

HOW IT WORKS:

> Galera: to create a Galera cluster you must first apply the manifest of
  the node you chose to be the Galera master; it creates the cluster.
  Then you can apply the manifests of the other 'slave' nodes.
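
  To verify that all nodes joined, query any node for the cluster size
  (wsrep_cluster_size is a standard Galera status variable; the
  credentials below are placeholders for whatever you configured):

    mysql -uroot -ppassword -e "SHOW STATUS LIKE 'wsrep_cluster_size';"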
  
> Keepalived: the decision about which server should hold the virtual IP
  is based on the sum of two factors: the priority and the weight of each
  server. The weight is equal (by default) for each node, but the priority
  is incremental (100 to 102 if there are 3 load balancers).
  Assuming there are only two, the server that gains the IP is the one
  whose sum is 103 (priority 101 + weight 2), against the other whose sum
  is 102 (100 + 2). If the server holding the VIP goes down, its weight
  becomes 0, so its sum drops to 101 (101 + 0) and the less important load
  balancer (sum 102) takes the VIP. When the higher-priority server comes
  back to life, its sum returns to 103 and it gains the VIP again.
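
  A minimal sketch of the keepalived configuration behind this behavior;
  the interface, router ID and VIP below are illustrative values, not
  necessarily the repo's:

    # /etc/keepalived/keepalived.conf (illustrative)
    vrrp_script chk_haproxy {
        script   "killall -0 haproxy"   # succeeds only while haproxy runs
        interval 2
        weight   2                      # added to priority when healthy
    }

    vrrp_instance VI_1 {
        interface         eth0
        virtual_router_id 51
        priority          101           # e.g. hap2_priority
        virtual_ipaddress {
            10.55.1.155                 # the VIP
        }
        track_script {
            chk_haproxy
        }
    }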

> GlusterFS: to create a GlusterFS remote storage cluster you should
  first apply the manifests of the 'normal' servers (which include the
  glusterfs::server module; gluster-2 and gluster-3 in this test
  environment). Then you can apply the manifest of the main server (which
  includes the glusterfs::mainserver module; gluster-1 in this test
  environment). The main server adds the peers to the cluster, creates
  the volume and starts it.
  Use the client module to mount the GlusterFS volume.
  To check the status of your volume: gluster volume status
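
  For reference, the mainserver module essentially automates the standard
  GlusterFS commands; the volume name and brick paths here are
  illustrative, not the repo's actual settings:

    gluster peer probe gluster-2
    gluster peer probe gluster-3
    gluster volume create gv0 replica 3 gluster-1:/export/brick \
        gluster-2:/export/brick gluster-3:/export/brick
    gluster volume start gv0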
________________________________________________________________________________

TROUBLESHOOTING:
  
  HAProxy & Keepalived:
    > To test the VIP functionality in addition to the load balancing,
      use this (replace the IP address with your VIP):
      mysql -uhaproxy_root -ppassword -h 10.55.1.155 -e
      "show variables like 'wsrep_node_name';"
    > To see which HAProxy server currently holds the VIP: ip a | grep eth0
    > You can run 2 or 3 HAProxy load balancers, but remember to change
      the haproxy_nodes value in prisma/data/common.yaml
    > There seems to be some incompatibility between systemd and HAProxy
      on CentOS: after each boot, a command that forces HAProxy to bind
      using the haproxy.cfg configuration file is needed. This is not
      needed on Ubuntu. I worked around the issue by automating (with
      Puppet, of course) the addition of a cronjob to the crontab; see
      the sketch below. It's not pretty, but it works. I hope to find a
      better solution.
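
      A sketch of the kind of Puppet cron resource that implements this
      workaround; the exact command the repo schedules is an assumption:

        # illustrative; the actual command in the repo may differ
        cron { 'haproxy-boot-bind':
          command => '/usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -D',
          user    => 'root',
          special => 'reboot',
        }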
      
  RabbitMQ cluster:
    > If, for any reason, you need to apply the manifest of a RabbitMQ
      node a second time, please run the following command first:
      killall -u rabbitmq (Ubuntu).
    > To verify the status of your cluster: 'rabbitmqctl cluster_status'
    > I used the official puppetlabs/rabbitmq module, adapting it to my
      needs. Please refer to it for more options and configurations
      (https://forge.puppetlabs.com/puppetlabs/rabbitmq).
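
      For reference, clustering with that module is driven by its main
      class; the node names and cookie below are illustrative:

        # illustrative use of puppetlabs/rabbitmq
        class { '::rabbitmq':
          config_cluster    => true,
          cluster_nodes     => ['rabbit-1', 'rabbit-2', 'rabbit-3'],
          cluster_node_type => 'disc',
          erlang_cookie     => 'CHANGEME_SECRET_COOKIE',
        }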
  
  GlusterFS cluster:
    > If the apply action on your main server fails, you probably have a
      network issue. You will most likely see the red error on the
      'gluster peer probe' command. This command adds a node to the
      cluster, and if it fails there is a network problem. If you run the
      command (gluster peer probe NODE_IP) yourself, you should see
      something like:
      "peer probe: failed: Probe returned with unknown errno 107".
      Make sure your nodes can communicate with each other and that NO
      firewall is blocking the GlusterFS ports.
      If you are using a firewall (iptables or firewalld), please open
      these ports: TCP 24007-24047, 111 and 38465-38467; UDP 111.
      To verify the status of your cluster: 'gluster volume status' 
      from any node.
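
      Assuming firewalld, opening those ports might look like this (run
      on every Gluster node):

        firewall-cmd --permanent --add-port=24007-24047/tcp
        firewall-cmd --permanent --add-port=111/tcp --add-port=111/udp
        firewall-cmd --permanent --add-port=38465-38467/tcp
        firewall-cmd --reload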

--------------------------------------------------------------------------------

  Puppet Master:
    > If you need to execute the initial script a second time, first you MUST:
      1) stop the httpd service (systemctl stop httpd)
      2) clean all current certificates from master (puppet cert clean <fqdn>)
      3) remove the /var/lib/puppet/ssl directory from the agents
      4) remove the /etc/httpd/conf.d/puppetmaster.conf from the master
      5) remove all files from /etc/puppet/environments and /etc/puppet/modules
      6) also remove hiera.yaml, PuppetFile and the data folder from /etc/puppet
      7) git clone the repo and run the /scripts/centos_install.sh again
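
      Condensed into shell (assuming the default CentOS paths; <fqdn>
      stays a placeholder for each agent's certname):

        systemctl stop httpd
        puppet cert clean <fqdn>    # repeat for each agent certificate
        # on each agent: rm -rf /var/lib/puppet/ssl
        rm -f  /etc/httpd/conf.d/puppetmaster.conf
        rm -rf /etc/puppet/environments/* /etc/puppet/modules/*
        rm -f  /etc/puppet/hiera.yaml /etc/puppet/PuppetFile
        rm -rf /etc/puppet/data
        # then git clone the repo and run /scripts/centos_install.sh again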

  Puppet Agent:
    > You may have problems receiving the catalog if:
      1) You forgot to properly edit the /etc/hosts file
      2) You have old certificates. Remove the /var/lib/puppet/ssl folder
      3) You changed the environment without updating the
         /etc/puppet/puppet.conf with the correct 'environment' value
      4) The server's fqdn in your puppet agent command is wrong
      These problems are self-explanatory. Correct them and everything
      will work.
________________________________________________________________________________

If you have any questions or suggestions,
please feel free to contact me at [email protected]

                               THAT'S ALL FOLKS
