Ansible Tower
Modules
Plugins
Inventory
Playbook
- Yaml syntax
- indentation is important and tab doesn’t work
- hyphens are used for lists
- dictionaries are defined by dictionary_name: followed by further-indented key-value pairs, or a list
- Tasks invoke modules by name, e.g. yum
- Version controlled and should be updated in VCS
- Description of system or application
- Very simple language
- Effectively self documents
- Conditional statements can be used
- variables (Jinja2 format) can be used
- Dictionary based variable definition
foo:
  field1: one
  field2: two
- Dictionary based reference (either works)
foo['field1']
foo.field1
{{ hostvars[host].ansible_default_ipv4.address }}
- standard definition
vars:
  http_port: 80
- standard reference in a template
vars:
  app_path: {{ base_path }}/22
- or if in a YAML file, quoted
vars:
  app_path: "{{ base_path }}/22"
Roles
- SmartOS VM Provisioning https://galaxy.ansible.com/precurse/smartos_provision/
- you are still allowed to list tasks, vars_files, and handlers “loose” in playbooks without using roles, but roles are a good organizational feature and are highly recommended. If there are loose things in the playbook, the roles are evaluated first.
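A minimal sketch of mixing a role with a "loose" task (the role and host names here are hypothetical); the role's tasks are evaluated before the loose task:

```yaml
---
- hosts: webservers
  roles:
    - common            # evaluated first
  tasks:
    - name: a "loose" task outside any role
      debug:
        msg: "runs after the common role has finished"
```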
Handlers
- handlers can be created with a “listen” topic, where multiple handlers use the same topic and anything that notifies on this topic name, actions all of these listening handlers in one go. It’s a way of grouping multiple handler events into one notify action (http://docs.ansible.com/ansible/playbooks_intro.html#handlers-running-operations-on-change)
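Following the linked docs, a sketch of grouping handlers under one listen topic (service names are illustrative):

```yaml
handlers:
  - name: restart memcached
    service: name=memcached state=restarted
    listen: "restart web services"
  - name: restart apache
    service: name=apache state=restarted
    listen: "restart web services"

tasks:
  - name: change a config that affects both services
    template: src=web.conf.j2 dest=/etc/web.conf
    notify: "restart web services"   # fires both handlers above
```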
- Notify handlers are always run in the same order they are defined, not in the order listed in the notify-statement. This is also the case for handlers using listen.
- Handler names and listen topics live in a global namespace.
- If two handler tasks have the same name, only one will run.
- Prior to Ansible 2.1 you could not notify a handler defined inside an include; as of 2.1 this works, but the include must be static.
- handlers notified within the pre_tasks, tasks, and post_tasks sections are automatically flushed at the end of the section in which they were notified
- handlers notified within the roles section are automatically flushed at the end of the tasks section, but before any handlers notified within tasks
- To flush handlers (to force them to be processed earlier than the end of the play. I.e. at the end of a block)
tasks:
  - shell: some tasks go here
  - meta: flush_handlers
- When a task fails on a host, handlers which were previously notified will not be run on that host. This can lead to cases where an unrelated failure leaves a host in an unexpected state. For example, a task could update a configuration file and notify a handler to restart some service; if a task later in the same play fails, the service will not be restarted despite the configuration change. Use the --force-handlers command-line option, or set force_handlers: True in the play or in configuration, to change this behaviour.
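A sketch of setting this at the play level (file names and the handler are hypothetical):

```yaml
---
- hosts: all
  force_handlers: True   # run notified handlers even if a later task fails
  tasks:
    - name: update config
      template: src=app.conf.j2 dest=/etc/app.conf
      notify: restart app
  handlers:
    - name: restart app
      service: name=app state=restarted
```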
- when notifying handlers from a task, the task's action can't be debug or meta, because these don't trigger the handler
Dependencies
- Role dependencies can also be installed from source control repos or tar files
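A sketch of declaring such dependencies in a role's meta/main.yml, using the comma-separated src,version,name format from the Ansible docs of this era (the repo URL, tag, and names here are hypothetical):

```yaml
# roles/myapp/meta/main.yml
dependencies:
  - { role: 'git+http://git.example.com/repos/role-foo,v1.1,foo' }
  - { role: '/path/to/tar/file.tgz,,friendly-name' }
```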
Tasks
- blocks can be used to only run groups of tasks if a condition is met
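A minimal sketch of a conditional block (package and condition are illustrative); the when is applied to every task inside the block:

```yaml
tasks:
  - block:
      - yum: name=httpd state=present
      - service: name=httpd state=started
    when: ansible_os_family == 'RedHat'
```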
Templates
- Templates are a way of using Jinja2 templating language to create a file based on conditions and loops defined in the template file
- Conditions in the template file would include things like
{% if (inventory_hostname in groups.dbservers) %}
-A INPUT -p tcp --dport 3306 -j ACCEPT
{% endif %}
- but only output the following if matched
-A INPUT -p tcp --dport 3306 -j ACCEPT
- the final file that is output from the template is copied into place as follows:
- name: insert iptables template
  template: src=iptables.j2 dest=/etc/sysconfig/iptables
  when: ansible_distribution_major_version != '7'
  notify: restart iptables
- This could then be picked up by a handler as follows:
- name: restart iptables
  service: name=iptables state=restarted
- Jinja2 provides a convenience that allows you to filter loop items easily as part of the for statement. Repeating the previous example using this convenience:
# data dirs
{% for dir in data_dirs if dir != "/" %}
data_dir = {{ dir }}
{% else %}
# no data dirs found
{% endfor %}
- Jinja2 can also call many Python methods, like split(), which could be used to pick the first part of a command with multiple parameters:
{% if command is defined %}
{% set command_exe = command.split() %}
# The command we pass to Containerpilot to start the application
RUN chmod ug+x {{ command_exe[0] }}
CMD [ "{{ command }}" ]
{% endif %}
Variables
- Facts are statically assigned, gathered without running your own commands on the remote host; registered variables are assigned using register: variable_name and usually hold the result of running some command on the remote host
- variable type conversion can be done using a pipe. E.g.
servers_number.stdout|int == 2
- Multi-line (array-like) variable output can be accessed with
command: /opt/smartdc/bin/sdcadm experimental update dockerlogger --servers {{ server_list.stdout_lines[0] }},{{ server_list.stdout_lines[1] }}
- Looping over these variables would be written as (item is not interchanged, it is literal):
- name: Setting up Docker Logger on additional node
  command: /opt/smartdc/bin/sdcadm experimental update dockerlogger --servers {{ item.0 }},{{ item.1 }}
  with_indexed_items: "{{ server_list.stdout_lines }}"
- Hash variables (dictionaries): in some advanced scenarios it is desirable to replace just one bit of a hash, or add to an existing hash, rather than replacing the hash altogether. To unlock this ability, a change is necessary in the Ansible config file. The entry is hash_behaviour, which takes either replace (the default) or merge. A setting of merge instructs Ansible to merge or blend the values of two hashes when presented with an override scenario, rather than completely replacing the old variable data with the new data.
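A sketch of the difference, assuming hash_behaviour = merge is set under [defaults] in ansible.cfg (the variable names and files are hypothetical):

```yaml
# group_vars/all.yml
app_config:
  port: 80
  debug: false

# group_vars/webservers.yml (more specific, overrides the above)
app_config:
  debug: true

# default (replace):   app_config == {debug: true}            -- port is lost
# hash_behaviour=merge: app_config == {port: 80, debug: true}
```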
It’s important to note that when using roles, the order of the precedence of variables depends upon the order of the roles. So if you have a key in defaults which has the same name as a key in vars of a role which runs before it, it will overwrite the previous role’s key value (even if it is in vars).
Blocks
- when conditions on blocks are appended to the tasks within that block and evaluated in each task's context
- it also seems that if a task already has its own when condition, the append doesn't happen and the task's when overrides the block's when entirely
Conditionals
- when using when on a task that registers a variable, even if the when condition isn't matched the variable is still updated, recording that the condition didn't match and the task was skipped (this is a horrible behaviour). One way around this is to use templates containing Jinja2 conditions, copy the final file to the server, then run the correct command based on those conditions. Alternatively, and more simply, two different handlers and
meta: flush_handlers
(to force processing before the end of the playbook) can be used, like so:
---
- block:
    - name: get list of running and setup servers when we need to use the headnode
      always_run: yes
      command: /bin/echo
      ignore_errors: yes
      notify:
        - get server list including headnode
      when: servers_number.stdout|int == 2
    - name: get list of running and setup servers when we DO NOT need to use the headnode
      always_run: yes
      command: /bin/echo
      ignore_errors: yes
      notify:
        - get server list excluding headnode
      when: servers_number.stdout|int > 2
  when: ( binder_insts.stdout|int < 2 ) and ( manatee_insts.stdout|int < 2 )
  always:
    - meta: flush_handlers
...
- The two different handlers look like this:
---
- name: get server list including headnode
  command: /opt/smartdc/bin/sdc-server lookup setup=true status=running
  register: server_list
  ignore_errors: yes
...

---
- name: get server list excluding headnode
  command: /opt/smartdc/bin/sdc-server lookup setup=true status=running headnode=false
  register: server_list
  ignore_errors: yes
...
- Inline conditionals: if statements can be used inside inline expressions. This can be useful in scenarios where additional newlines are not desired. For example, defining an API as either cinder or cinderv2:
API = cinder{{ 'v2' if api.v2 else '' }}
Communication
- Using ssh enables ControlPersist (a performance feature), Kerberos, and options in ~/.ssh/config such as jump host setup.
- ansible_user: root sets what user to connect as for a given set of hosts.
- gather_facts: no tells Ansible not to go looking for Python and querying details from the host when connecting.
- The SSH connection setting pipelining=true changes how modules are transported. Instead of opening an SSH connection to create a directory, another to write out the composed module, and a third to execute it and clean up, Ansible opens a single SSH connection to the remote host and, over that live connection, pipes in the zipped composed module code and a script for execution. This reduces the connections from three to one, which can really add up. By default, pipelining is disabled.
General
- http://docs.ansible.com
- Also manages virtual and physical clouds and networks
- How different is it to Terraform?
- Wins over Puppet and Chef: simplicity, self documenting and speed as a consequence.
- Security of running commands over ssh is managed by RBAC, which defines exactly what the remote user can run, and also by sudoers.
- Ansible control host should be hardened
- Will run on Unix
- Ansible galaxy for community roles templates https://galaxy.ansible.com
- Scan jobs query hosts to collect “facts” about them. Used to compare host states over time
- Facts can be used as conditions on what to install
- Simplicity and self documenting nature could allow Ansible to be used as a service portfolio
- Ipfw/ipfd/firewalld instead of iptables
- has a whole host of connector scripts for dynamically updating inventories from different providers: https://github.com/ansible/ansible/tree/devel/contrib including from Zabbix (https://github.com/ansible/ansible/blob/devel/contrib/inventory/zabbix.py) and EC2
- Whitespace and indentation are important and tabs are not accepted as whitespace
- Python code seems to be able to be used in the form of plugins and Jinja2 templates
- Values can span multiple lines using | or >. Spanning multiple lines with | will include the newlines; > folds newlines away and is used to make what would otherwise be a very long line easier to read and edit. In either case the indentation is ignored.
- Locally, Ansible can be instructed to log its actions as well; this is controlled by the ANSIBLE_LOG_PATH environment variable.
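A sketch of the two block-scalar styles (keys are illustrative):

```yaml
# '|' keeps newlines: value is "line one\nline two\n"
literal_block: |
  line one
  line two

# '>' folds newlines into spaces: value is "this long value becomes one line\n"
folded_block: >
  this long value
  becomes one line
```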
- When Ansible operates on a host, it will attempt to log the action to syslog (if verbosity level three or more is used). If this action is being done with a user with appropriate rights, it will cause a message to appear in the syslog file of the host. This message includes the module name and the arguments passed along to that command, which could include your secrets. To prevent this from happening, a play and task key exists named no_log.
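A sketch of no_log on a task that handles a secret (module arguments and variable names are hypothetical):

```yaml
- name: set a secret via a module argument
  user:
    name: appuser
    password: "{{ secret_password }}"
  no_log: true   # keeps the arguments (including the secret) out of syslog and Ansible output
```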
Setup
http://docs.ansible.com/ansible/intro_installation.html
but with modifications:
gMac:~ gaz$ sudo -H pip install ansible
gMac:~ gaz$ # or
gMac:~ gaz$ sudo pip install git+git://github.com/ansible/ansible.git@stable-2.1
gMac:~ gaz$ mkdir -p /Users/gaz/Library/Caches/pip/http # may not be needed because the following command may resolve the issue
There are some permission issues on Mac which may be resolved by:
gMac:~ gaz$ sudo -H pip install --upgrade setuptools --user python
gMac:~ gaz$ sudo -H pip install --upgrade ansible --user python
Compile error
Command "/usr/bin/python -u -c "import setuptools, tokenize;__file__='/private/tmp/pip-build-zfrbst/pycrypto/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-IDWHMQ-record/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /private/tmp/pip-build-zfrbst/pycrypto/
See https://github.com/ansible/ansible/issues/12354
Is fixed by:
gMac:~ gaz$ xcode-select --install
xcode-select: note: install requested for command line developer tools
setuptools issue when trying to install ansible-container, complaining of:
File "/private/tmp/pip-build-YobzbY/ansible-container/setup.py", line 11, in <module>
  packages=find_packages(include='container.*'),
TypeError: find_packages() got an unexpected keyword argument 'include'

Fixed this by doing a brew update followed by brew install python
basic config
gMac:ansible gaz$ sudo cat /etc/ansible/group_vars/gbhome_all/ssh_options.yml
---
ansible_user: root
...
gMac:~ gaz$ cat /Users/gaz/.ansible.cfg
[defaults]
[ssh_connection]
scp_if_ssh=True
gMac:ansible root# mkdir -p roles/{common,triton_headnode,triton_computenode}/{tasks,handlers,templates,files,vars,defaults,meta}
gMac:ansible root# mkdir {library,filter_plugins}
gMac:ansible gaz$ tree .
.
├── filter_plugins
├── group_vars
│   ├── gbhome_all
│   │   ├── ansible_options.yml
│   │   └── ssh_options.yml
│   ├── gbhome_computenodes
│   └── gbhome_headnodes
├── host_vars
├── hosts
├── library
├── roles
│   ├── common
│   │   ├── defaults
│   │   ├── files
│   │   ├── handlers
│   │   ├── meta
│   │   ├── tasks
│   │   ├── templates
│   │   └── vars
│   ├── smartos_host
│   │   ├── defaults
│   │   ├── files
│   │   ├── handlers
│   │   ├── meta
│   │   ├── tasks
│   │   │   ├── install_pkgsrc.yml
│   │   │   ├── install_python.yml
│   │   │   └── main.yml
│   │   ├── templates
│   │   └── vars
│   │       ├── ansible_options.yml
│   │       ├── main.yml
│   │       ├── pkgsrc.yml
│   │       └── ssh_options.yml
│   ├── triton_computenode
│   │   ├── defaults
│   │   ├── files
│   │   ├── handlers
│   │   ├── meta
│   │   │   └── main.yml
│   │   ├── tasks
│   │   ├── templates
│   │   └── vars
│   └── triton_headnode
│       ├── defaults
│       ├── files
│       ├── handlers
│       ├── meta
│       │   └── main.yml
│       ├── tasks
│       │   └── main.yml
│       ├── templates
│       └── vars
└── triton_headnode_setup.yml

40 directories, 14 files
As of version 2.0, Ansible uses a few more file handles to manage its forks. Mac OS X by default is configured for a small amount of file handles, so if you want to use 15 or more forks you’ll need to raise the ulimit with sudo launchctl limit maxfiles unlimited. This command can also fix any “Too many open files” error.
Docker
- Docker modules also require the docker Python library to be installed
- https://docs.ansible.com/ansible/guide_docker.html
sudo -H pip install docker-py
sudo -H pip install docker-compose
- seems to have some peculiarities (maybe a bug) when trying to install an image for which multiple versions are found
- Triton interprets mem: 256M and looks for an appropriate package that matches the memory size
- tls_hostname needs to be the Docker API remote address
- Ansible handles the provisioning of containers behind the scenes, so to get logs from a container either use a different log driver and ship container logs to a log server, or use docker logs to grab them (see cassandra_integration_test/tasks/post_setup_output.yml in initial commit f5deb2c50de2adbed6e4b4475ed0f092462cef1d)
command: /usr/local/bin/docker logs "{{ cassandra_integration_test_results.ansible_facts.ansible_docker_container.Id }}"
- When using docker_service, Ansible variables can't be passed to a remote docker-compose.yml file. So if variables must be passed, the whole docker-compose file needs to be passed in the docker_service definition: option. This could well be solved by using Jinja2 templates to autogenerate the file as part of the playbook.
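A sketch of the template-then-run approach (template name and paths are hypothetical): render the compose file with Jinja2 so variables are filled in, then point docker_service at the rendered directory.

```yaml
- name: generate docker-compose.yml with variables filled in
  template:
    src: docker-compose.yml.j2
    dest: /opt/app/docker-compose.yml

- name: bring the stack up from the rendered file
  docker_service:
    project_src: /opt/app
```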
- using definition: and something like
command: /usr/local/bin/docker logs {{ app_name }}
is essential for capturing logs to stdout when using docker_service, because multiple services will usually be in use
- Triton currently only accepts docker-compose network_mode: bridge, but so far I've found connectivity issues when using this so reverted back to docker-compose v1 format; have since started implementing v2
- Running docker-compose CLI commands doesn't seem to work after using Ansible to start containers, even when specifying the docker-compose.yml file
- This is because of specifying {{ container_name }}. It seems that all names must be dynamic so that orchestration (e.g. scaling) can be implemented. Could be to do with {{ project_name }}
- even when not specifying a container_name, a prefix will be assigned by docker-compose based on the directory name (or {{ project_src }}) where the docker-compose.yml file (et al) is located. Therefore, putting them in roles/files/consul is advised
- when using affinity in a playbook it must be single and double quoted:
labels:
  com.docker.swarm.affinities: '["container==spark-master"]'
- Error retrieving container list: ('Connection aborted.', BadStatusLine(\"''\",))"} is most likely due to a TLS issue. In our case it was because environment variables were being set with an IP address while the certificate was created with a *.my.sdc-docker CN; environment variable connection strings need to match certificate CNs.
- "msg": "SSL Exception: hostname '192.168.78.5' doesn't match u'*.sdc-docker'" was because the environment variable for docker_host was set to use an IP address but the certificate contained a name.
Vault
set vault_password_file = ~/.ansible-vault.cfg in ~/.ansible.cfg
follow https://gist.github.com/tristanfisher/e5a306144a637dc739e7 and http://docs.ansible.com/ansible/playbooks_vault.html for usage
- To enable basic logging on the control machine, see the configuration file documentation and set the log_path configuration file setting.
- Occasionally you’ll encounter a device that doesn’t support SFTP. This is rare, but should it occur, you can switch to SCP mode in the configuration file.
- The group_vars/ and host_vars/ directories can exist in the playbook directory OR the inventory (/etc/ansible/) directory. If both paths exist, variables in the playbook directory will override variables set in the inventory directory.
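A sketch of that override (group name and variable are hypothetical):

```yaml
# /etc/ansible/group_vars/webservers.yml   (inventory directory)
http_port: 80

# ./group_vars/webservers.yml              (playbook directory -- this one wins)
http_port: 8080
```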
- how managed hosts are connected to can be modified in a number of ways and stored in variables: http://docs.ansible.com/ansible/intro_inventory.html#list-of-behavioral-inventory-parameters
- Can be used to configure Docker containers: http://docs.ansible.com/ansible/intro_inventory.html#non-ssh-connection-types
- always_run (check_mode for > v2.2) has an unexpected behaviour:
- name: copy public key
  copy:
    always_run: yes
    src: /Users/gaz/.ssh/macRSA.pub
    dest: /usbkey/config.inc/authorized_keys
- runs successfully but doesn’t copy the file. The task needs to be:
- name: copy public key
  always_run: yes
  copy:
    src: /Users/gaz/.ssh/macRSA.pub
    dest: /usbkey/config.inc/authorized_keys
- NOTE: always_run should be at the same indent level as name and copy
- Python libraries can be found in: /Library/Python/2.7/site-packages/ansible/
- Common functions in: /Library/Python/2.7/site-packages/ansible/module_utils/docker_common.py
- Ansible complains about a minimum version not being met even though the installed version is higher
Examples