<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Wannes Van Causbroeck, Author at Cloudar</title>
	<atom:link href="https://cloudar.be/author/wannes/feed/" rel="self" type="application/rss+xml" />
	<link>https://cloudar.be/author/wannes/</link>
	<description>100% Focus On AWS // 100% Customer Obsession</description>
	<lastBuildDate>Sat, 26 Sep 2015 21:27:15 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>Test kitchen opsworks workflow</title>
		<link>https://cloudar.be/awsblog/test-kitchen-opsworks-workflow/</link>
					<comments>https://cloudar.be/awsblog/test-kitchen-opsworks-workflow/#respond</comments>
		
		<dc:creator><![CDATA[Wannes Van Causbroeck]]></dc:creator>
		<pubDate>Sat, 26 Sep 2015 21:27:15 +0000</pubDate>
				<category><![CDATA[AWS Blog]]></category>
		<guid isPermaLink="false">https://cloudar.be/?p=3151</guid>

					<description><![CDATA[<p>As testing new cookbooks directly in opsworks takes a long time, it makes a lot of sense to test them in vagrant first. There is some documentation available on how to do this, but some of it is out of date, broken or unclear. I&#8217;ve tried to cobble something together that&#8217;s quick to set up [&#8230;]</p>
<p>The post <a href="https://cloudar.be/awsblog/test-kitchen-opsworks-workflow/">Test kitchen opsworks workflow</a> appeared first on <a href="https://cloudar.be">Cloudar</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>As testing new cookbooks directly in opsworks takes a long time, it makes a lot of sense to test them in vagrant first. There is some documentation available on how to do this, but some of it is out of date, broken or unclear.</p>
<p>I&#8217;ve tried to cobble something together that&#8217;s quick to set up and works well enough (i.e. not 100% identical to opsworks, but close enough).</p>
<p>To mimic an opsworks environment in vagrant, we need several tools:</p>
<ul>
<li>vagrant</li>
<li>test kitchen</li>
<li>berkshelf</li>
<li>chef</li>
</ul>
<p>Below we describe one way to create and test a new cookbook. There is another way (by using the opsworks provisioner) which we&#8217;ll use later. For now, the standard chef_zero provisioner works well enough.<br />
This setup allows us to test a single cookbook. If you need to test several cookbooks at the same time, things change slightly; more about that later.</p>
<h2>Creating a project</h2>
<pre><code>
mkdir [project name]
cd !$
mkdir cookbooks 
berks cookbook [cookbook name]  ## create the skeleton for a new cookbook

</code></pre>
<p>&nbsp;</p>
<p>Modify the <em>.kitchen.yml</em> file:</p>
<pre><code>
---
driver:
  name: vagrant
  require_chef_omnibus: 11.10.4  ## opsworks still uses this version of chef
provisioner:
  name: chef_zero  ## chef_solo is deprecated
  environments_path: ./environments  ## this folder contains the json file that mimics the opsworks environment data
platforms:
  - name: centos-6.7  ## most similar to the current amazon linux
suites:
  - name: testsuite
    run_list:
      - recipe[cookbook name::recipe]
    attributes:  ## standard way to provide attributes, but it&#039;s better to use the environments functionality: that way we can just copy and paste the json data from opsworks to vagrant
      tester:
        test: &quot;hello&quot;
    provisioner:  ## Opsworks style attributes
      client_rb:
        environment: test  ## the name of the actual file containing our json data
</code></pre>
<p>&nbsp;</p>
<p>To obtain the opsworks attributes, create an opsworks stack + host, log in to that host, and run:</p>
<pre><code>
sudo opsworks-agent-cli get_json
</code></pre>
<p>&nbsp;</p>
<p>This command gives you the full list of attributes. It&#8217;s better to keep only the attributes we need and transfer those to our environments file. Still, the cool thing about opsworks is that all hosts can access all data about the stack through these attributes, so it&#8217;s worth checking them out in detail to see what&#8217;s available.<br />
The example below contains the bare minimum to get things working.</p>
<pre><code>
{
  &quot;default_attributes&quot;: {
    &quot;opsworks&quot; : {
      &quot;stack&quot; : {
        &quot;name&quot; : &quot;MyStack&quot;,
        &quot;id&quot; : &quot;42dfd151-6766-4f1c-9940-ba79e5220b58&quot;
      }
    }
  },
  &quot;chef_type&quot; : &quot;environment&quot;,
  &quot;json_class&quot; : &quot;Chef::Environment&quot;
}
</code></pre>
<p>Custom attributes have to be added under <em>default_attributes</em> and can be accessed with node[&#039;custom attribute&#039;]. In opsworks you can just specify { &#039;custom attribute&#039;: &#039;value&#039; } in the custom json field. Note that json doesn&#8217;t allow comments, so keep this file pure data.</p>
<p>&nbsp;</p>
<p>Next, if our cookbook has dependencies on other cookbooks, we need to set up our Berksfile.</p>
<pre><code>
source &quot;https://supermarket.chef.io&quot;
 
metadata

cookbook &#039;supermarket_cookbook&#039;
</code></pre>
<p>&nbsp;</p>
<p>You also need to add the dependencies in the metadata.rb file:</p>
<pre><code>
name             &#039;cookbook_name&#039;
maintainer       &#039;YOUR_COMPANY_NAME&#039;
maintainer_email &#039;YOUR_EMAIL&#039;
license          &#039;All rights reserved&#039;
description      &#039;Installs/Configures my cookbook&#039;
long_description IO.read(File.join(File.dirname(__FILE__), &#039;README.md&#039;))
version          &#039;0.1.0&#039;
 
depends &#039;supermarket_cookbook&#039;
</code></pre>
<h2>Test</h2>
<p>We didn&#8217;t actually specify any real tests in this example (check out the test kitchen manual on how to set this up), but we can provision our test machine with the following command:</p>
<pre><code>
kitchen converge
</code></pre>
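<p>If you do want automated checks, test kitchen&#8217;s busser can pick up bats tests placed under <em>test/integration/[suite name]/bats/</em> and run them with <em>kitchen verify</em>. A minimal sketch (the file name and the file being asserted on are just examples, replace them with whatever your recipe actually manages):</p>
<pre><code>
#!/usr/bin/env bats
## test/integration/testsuite/bats/verify.bats

@test &quot;chef run created the expected file&quot; {
  ## hypothetical path: substitute a resource your recipe creates
  [ -f /tmp/created_by_my_recipe ]
}
</code></pre>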
<p>&nbsp;</p>
<p>Once the machine has converged (or not), we can log into the instance to do some manual debugging with:</p>
<pre><code>
kitchen login
</code></pre>
<p>&nbsp;</p>
<p>Just as a sidenote: if you want to debug on opsworks, all interesting files are under <em>/opt/aws/opsworks</em>. Even better: you can modify your cookbooks directly under <em>/opt/aws/opsworks/current/site-cookbooks</em> and run opsworks-agent-cli with the correct options to rerun chef with your modified cookbooks.</p>
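<p>On the instance, that rerun looks something like this (list_commands and run_command are existing opsworks-agent-cli subcommands; run_command without arguments replays the most recent lifecycle event):</p>
<pre><code>
cd /opt/aws/opsworks/current/site-cookbooks   ## edit your cookbooks here first
sudo opsworks-agent-cli list_commands         ## show the lifecycle events the agent has run
sudo opsworks-agent-cli run_command           ## rerun the last one with the modified cookbooks
</code></pre>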
<h2>Testing multiple cookbooks at once</h2>
<p>There is an important difference in how Berksfiles are interpreted by opsworks and by test kitchen: you need a different Berksfile and metadata setup per environment.<br />
As a reminder, this is our directory layout:</p>
<pre><code>
.
└── project
    ├── Berksfile   # testkitchen
    ├── .kitchen.yml   #  testkitchen
    └── cookbooks
        ├── Berksfile   # opsworks
        ├── cookbook-a
        │   ├── Berksfile.in   # testkitchen
        │   └── metadata.rb   # opsworks + testkitchen
        └── cookbook-b
            ├── Berksfile.in   # testkitchen
            └── metadata.rb   # opsworks + testkitchen
</code></pre>
<p>&nbsp;</p>
<p>One special remark: Opsworks needs a git repo with only the Berksfile and the two cookbook directories, so take care how you commit to git!<br />
The main Berksfile for testkitchen is the one under /project/. It pulls in the Berksfile.in of every cookbook.<br />
<em>I found this code somewhere on the internet, but can&#8217;t find the link anymore, so my apologies for not giving credit where credit is due!</em></p>
<pre><code>
source &quot;https://supermarket.chef.io&quot;
 
# Note the absence of the metadata line.
 
def dependencies(path)
  berks = &quot;#{path}/Berksfile.in&quot;
  instance_eval(File.read(berks)) if File.exists?(berks)
end
Dir.glob(&#039;./cookbooks/*&#039;).each do |path|
  dependencies path
  cookbook File.basename(path), :path =&gt; path
end
</code></pre>
<p>&nbsp;</p>
<p>The sub Berksfiles only contain the cookbooks needed by that specific cookbook. The metadata.rb file is sourced automatically.</p>
<pre><code>
cookbook &quot;dependency-a&quot;
</code></pre>
<pre><code>
cookbook &quot;dependency-b&quot;
</code></pre>
<p>&nbsp;</p>
<p>Opsworks needs the following file. Again, no metadata is referenced, as opsworks will find the necessary files automatically.</p>
<pre><code>
source &quot;https://supermarket.chef.io&quot;
 
cookbook &quot;dependency-a&quot;
cookbook &quot;dependency-b&quot;
</code></pre>
<p>&nbsp;</p>
<p>This is the &#8216;normal&#8217; Berksfile for cookbook-a. You still need this if you want to test it independently!</p>
<pre><code>
source &quot;https://supermarket.chef.io&quot;
 
metadata
 
cookbook &quot;dependency-a&quot;
</code></pre>
<p>&nbsp;</p>
<h2>References</h2>
<p>http://pixelcog.com/blog/2014/virtualizing-aws-opsworks-with-vagrant/ (this uses the opsworks provisioner, didn&#8217;t get it working yet)<br />
http://enriquecordero.com/programming/opsworks-chef-workflow/<br />
http://docs.aws.amazon.com/opsworks/latest/userguide/opsworks-opsworks-mock.html (outdated)<br />
https://www.youtube.com/watch?v=0sPuAb6nB2o (very interesting test kitchen / berkshelf tutorial)</p>
<p>The post <a href="https://cloudar.be/awsblog/test-kitchen-opsworks-workflow/">Test kitchen opsworks workflow</a> appeared first on <a href="https://cloudar.be">Cloudar</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://cloudar.be/awsblog/test-kitchen-opsworks-workflow/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Extend ec2 boot disk</title>
		<link>https://cloudar.be/awsblog/extend-ec2-boot-disk/</link>
					<comments>https://cloudar.be/awsblog/extend-ec2-boot-disk/#respond</comments>
		
		<dc:creator><![CDATA[Wannes Van Causbroeck]]></dc:creator>
		<pubDate>Tue, 07 Jul 2015 10:16:40 +0000</pubDate>
				<category><![CDATA[AWS Blog]]></category>
		<guid isPermaLink="false">https://cloudar.be/?p=3072</guid>

					<description><![CDATA[<p>As most aws users know by now, amazon doesn&#8217;t make it particularly easy to extend boot volumes. The centos images don&#8217;t use LVM and the EBS stores are not extendable. The aws recommended way of extending a disk is: stop the instance unmap the volume create a snapshot of the volume create a bigger volume [&#8230;]</p>
<p>The post <a href="https://cloudar.be/awsblog/extend-ec2-boot-disk/">Extend ec2 boot disk</a> appeared first on <a href="https://cloudar.be">Cloudar</a>.</p>
]]></description>
					<content:encoded><![CDATA[<p>As most aws users know by now, amazon doesn&#8217;t make it particularly easy to extend boot volumes. The centos images don&#8217;t use LVM, and EBS volumes can&#8217;t be extended in place.<br />
The <a href="http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-expand-volume.html">aws recommended way</a> of extending a disk is:</p>
<ul>
<li>stop the instance</li>
<li>unmap the volume</li>
<li>create a snapshot of the volume</li>
<li>create a bigger volume of the snapshot</li>
<li>mount the new volume on another stopped instance (in the same AZ! Otherwise we can&#8217;t map the volume)</li>
<li>start that instance</li>
<li>delete and recreate the partition table to fill the disk + resize the filesystem (these two steps are needed when using parted; fdisk can handle modifying the root volume in situ, but I couldn&#8217;t get it to play nice through scripting)</li>
<li>stop the instance</li>
<li>move the volume to the original instance</li>
<li>start that instance</li>
</ul>
<p>Phew! To avoid doing this manually and getting bored to death, I wrote a small script to handle it for me. It&#8217;s not the cleanest or most flexible script in the world (and could be improved no end), but it gets the job done.</p>
<pre><code>
#!/bin/bash
#
# ATTENTION: this only works when root login is allowed on your intermediate host. Preferably use centos 6
#
# This script uses an intermediate host to resize the disk + you need an ssh tunnel to that machine through a nat host. 
# This way we can also reach machines in a private subnet
# ex: ssh -i  -L 2222::22 
#
# Fill in the following variables
# and DON&#039;T forget to make sure grub boots the correct disk by changing root=/dev/xvda1 in /boot/grub/menu.lst on the intermediate host. Booting the wrong disk is caused
# by the fact that by default it mentions the UUID of the volume, and aws uses the same image for every instance of a specific ami.
#
# The script is called with two parameters:
# - the instance_id you want to resize
# - the new disk size in GB

# temporary host to use
temp_instance=
# temporary device name as called in the web gui
temp_device_aws=/dev/sdf # doesn&#039;t really matter, but this one works. I&#039;ve had issues with higher letters
# temporary device name as known by the OS
temp_device_host=/dev/xvdf # has to be in accordance with the $temp_device
# identity file to log into temphost
pemfile=
#username to log into temp instance
temp_username=root # sorry, won&#039;t work otherwise. You could adapt all commands to use sudo, but some commands don&#039;t work with it
# aws profile to use
profile=
# local port for ssh tunnel
tunnel_port=2222 # use the port you opened up for the ssh tunnel

source_instance=$1
new_size=$2
[[ $1 == &#039;&#039; ]] || [[ $2 == &#039;&#039; ]] &amp;&amp; { echo &quot;Usage: $0 [instance_id] [new size in GB]&quot;; exit 1; }

# for ease of use, let&#039;s put all our actions into functions

function get_instance_name {
  instance_id=$1
  aws --profile $profile ec2 describe-instances --query &#039;Reservations[*].Instances[*].Tags[?Key==`Name`].Value&#039; --output text --instance-ids $instance_id
}

function get_ip {
  instance_id=$1
  aws --profile $profile ec2 describe-instances --query &#039;Reservations[0].Instances[0].PrivateIpAddress&#039; --output text --instance-ids $instance_id
}

function get_bootvol_id {
  instance_id=$1
  aws --profile $profile ec2 describe-instances --query &#039;Reservations[0].Instances[0].BlockDeviceMappings[0].Ebs.VolumeId&#039; --output text --instance-ids $instance_id
}

function get_az {
  instance_id=$1
  aws --profile $profile ec2 describe-instances --query &#039;Reservations[0].Instances[0].Placement.AvailabilityZone&#039; --output text --instance-ids $instance_id
}

function check_stopped {
  instance_id=$1
  aws --profile $profile ec2 wait instance-stopped --instance-ids $instance_id
}
# the difference between this and the previous function is subtle, but needed.
function is_stopped {
  instance_id=$1
  aws --profile $profile ec2 describe-instances --query &#039;Reservations[*].Instances[*].State.Name&#039; --instance-ids $instance_id --output text | grep -q &#039;stopped&#039;
}

function wait_snapshot {
  snapshot_id=$1
  aws --profile $profile ec2 wait snapshot-completed --snapshot-id $snapshot_id
}

function wait_volume {
  volume_id=$1
  aws --profile $profile ec2 wait volume-available --volume-id $volume_id
}

function detach_volume {
  volume_id=$1
  aws --profile $profile ec2 detach-volume --volume-id $volume_id &gt; /dev/null
}

function create_snapshot {
  volume_id=$1
  aws --profile $profile ec2 create-snapshot --volume-id $volume_id --description &quot;${volume_id}-snap&quot;  | sed &#039;s/&quot;//g&#039; | grep -oe &#039;SnapshotId: [a-z0-9-]*&#039; | cut -d&#039; &#039; -f 2
}

function create_volume_from_snap {
  snapshot_id=$1
  az=$2
  size=$3
  aws --profile $profile ec2 create-volume --availability-zone $az --snapshot-id $snapshot_id --size $size --volume-type gp2 | sed &#039;s/&quot;//g&#039; | grep -oe &#039;VolumeId: [a-z0-9-]*&#039; | cut -d&#039; &#039; -f 2
}

function attach_volume {
  volume_id=$1
  instance_id=$2
  device=$3
  aws --profile $profile ec2 attach-volume --volume-id $volume_id --instance-id $instance_id --device $device &gt; /dev/null
}

function do_resize {
  temp_instance_ip=$1
  temp_file=$( echo &quot;/tmp/$(date +%s).sh&quot; )

  # NAT or direct version
  #ssh_cmd=&quot;ssh -o StrictHostKeyChecking=no -i $pemfile $temp_username@$temp_instance_ip &quot;
  ssh_cmd=&quot;ssh -o StrictHostKeyChecking=no -i $pemfile $temp_username@localhost -p $tunnel_port &quot;
  $ssh_cmd yum -y install parted 
  partition_start=$($ssh_cmd parted $temp_device_host print | grep -e &#039;^ 1&#039; | tr -s &#039; &#039;  | cut -d&#039; &#039; -f3  )
  $ssh_cmd parted -s $temp_device_host rm 1
  $ssh_cmd parted -s $temp_device_host mkpart primary $partition_start 100%
  $ssh_cmd parted -s $temp_device_host set 1 boot on
  # for some reason, this command sometimes doesn&#039;t work and you still need to log into the instance when the script has finished and manually resize the filesystem
  $ssh_cmd &quot; e2fsck -p -y -f $( echo ${temp_device_host}1 ) &amp;&amp; resize2fs $( echo ${temp_device_host}1 ) &quot;
}

function stop_instance {
  instance_id=$1
  aws --profile $profile ec2 stop-instances --instance-ids $instance_id &gt; /dev/null
}
function start_instance {
  instance_id=$1
  aws --profile $profile ec2 start-instances --instance-ids $instance_id &gt; /dev/null
}

function wait_for_ssh {
  host=$1
  while ! nc -w 1 localhost $tunnel_port | grep -q OpenSSH ; do
  # NAT instance
  #while ! nc -z $host 22 ; do   
    echo -n &quot;.&quot;
    sleep 1
  done
}

source_instance_name=$(get_instance_name $source_instance)
echo &quot;Starting resize&quot;
echo &quot;Resizing disk of host $source_instance_name&quot;

echo &quot;Getting info &amp; testing connections&quot;
temp_instance_name=$(get_instance_name $temp_instance) 
source_az=$( get_az $source_instance )
temp_source_az=$( get_az $temp_instance)
source_ip=$( get_ip $source_instance )
temp_ip=$( get_ip $temp_instance )
source_vol_id=$( get_bootvol_id $source_instance )

aws ec2 describe-instances --profile $profile --instance-id $source_instance &amp;&gt; /dev/null || { echo &quot;Source instance not found&quot;; exit 1; }
aws ec2 describe-instances --profile $profile --instance-id $temp_instance &amp;&gt; /dev/null || { echo &quot;Temporary instance not found&quot;; exit 1; }
nc -z localhost $tunnel_port || { echo &quot;Tunnel on port $tunnel_port is closed&quot;; exit 1; }
[[ $source_az == $temp_source_az ]] || { echo &quot;$temp_instance_name is not in the same AZ as $source_instance_name&quot;; exit 1; }
is_stopped $temp_instance || ssh -o StrictHostKeyChecking=no -i $pemfile $temp_username@localhost -p $tunnel_port grep -q &#039;root=/dev/xvda1&#039; /boot/grub/menu.lst || { echo &quot;Grub is incorrectly configured on $temp_instance_name&quot;; exit 1; }

echo &quot;Stopping instances&quot;
is_stopped $source_instance || stop_instance $source_instance 
is_stopped $temp_instance || stop_instance $temp_instance 
check_stopped $source_instance
check_stopped $temp_instance

echo &quot;Removing disk from $source_instance_name&quot;
detach_volume $source_vol_id

echo &quot;Creating snapshot&quot;
snap_id=$( create_snapshot $source_vol_id )
wait_snapshot $snap_id

echo &quot;Creating volume from snapshot&quot;
new_vol_id=$( create_volume_from_snap $snap_id $source_az $new_size )
wait_volume $new_vol_id

echo &quot;Attaching volume to intermediate host $temp_instance_name&quot;
attach_volume $new_vol_id $temp_instance $temp_device_aws

echo &quot;Booting intermediate host $temp_instance_name&quot;
start_instance $temp_instance
wait_for_ssh $temp_ip
echo &quot;Resizing disk&quot;
do_resize $temp_ip

echo &quot;Attaching resized volume to $source_instance_name&quot;
stop_instance $temp_instance
check_stopped $temp_instance
detach_volume $new_vol_id
sleep 5
attach_volume $new_vol_id $source_instance /dev/sda1
sleep 5

echo &quot;Starting host $source_instance_name&quot;
start_instance $source_instance
echo &quot;Done&quot;
</code></pre>
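<p>With the variables at the top filled in and the ssh tunnel to the intermediate host up, a run looks something like this (the script name, instance id and target size are just examples):</p>
<pre><code>
## tunnel to the intermediate host through the nat host, as described in the script header
ssh -i nat.pem -L 2222:[temp instance private ip]:22 [nat user]@[nat public ip] -N &amp;
./extend-bootdisk.sh i-abc12345 20   ## grow the boot disk of i-abc12345 to 20GB
</code></pre>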
<p>The post <a href="https://cloudar.be/awsblog/extend-ec2-boot-disk/">Extend ec2 boot disk</a> appeared first on <a href="https://cloudar.be">Cloudar</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://cloudar.be/awsblog/extend-ec2-boot-disk/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Ansible tomcat installation + .war deploy</title>
		<link>https://cloudar.be/awsblog/ansible-tomcat-installation-war-deploy/</link>
					<comments>https://cloudar.be/awsblog/ansible-tomcat-installation-war-deploy/#comments</comments>
		
		<dc:creator><![CDATA[Wannes Van Causbroeck]]></dc:creator>
		<pubDate>Thu, 25 Jun 2015 13:03:43 +0000</pubDate>
				<category><![CDATA[AWS Blog]]></category>
		<guid isPermaLink="false">https://cloudar.be/?p=3056</guid>

					<description><![CDATA[<p>Hi all, new employee here. To start learning about ansible and AWS, I created a small setup that does the following: create an aws instance install tomcat on it deploy a .war file from S3 and edit its contents While this is a very simple exercise, it does touch on some interesting topics about ansible [&#8230;]</p>
<p>The post <a href="https://cloudar.be/awsblog/ansible-tomcat-installation-war-deploy/">Ansible tomcat installation + .war deploy</a> appeared first on <a href="https://cloudar.be">Cloudar</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Hi all, new employee here.<br />
To start learning about ansible and AWS, I created a small setup that does the following:</p>
<ul>
<li>create an aws instance</li>
<li>install tomcat on it</li>
<li>deploy a .war file from S3 and edit its contents</li>
</ul>
<p>While this is a very simple exercise, it does touch on some interesting topics about ansible and aws.<br />
To start, we need something like this:<br />
<a href="https://cloudar.be/wp-content/uploads/2015/06/setup.png"><img fetchpriority="high" decoding="async" class="alignnone size-medium wp-image-3057" src="https://cloudar.be/wp-content/uploads/2015/06/setup-300x208.png" alt="setup" width="300" height="208" /></a></p>
<p>&nbsp;</p>
<ul>
<li>an S3 bucket to store our .war files</li>
<li>a nat host so ansible tower can manage servers from our vpc in the remote vpc</li>
<li>(and in our case an S3 read-only IAM policy as our version of ansible doesn&#8217;t support policy creation yet)</li>
</ul>
<h2>Tower config</h2>
<p>To allow our tower to connect through the nat host we need some extra config.<br />
In the root of our project we need an ansible.cfg file containing:</p>
<pre><code>
[ssh_connection]
ssh_args = -o ControlMaster=auto -o ControlPersist=16m -F /opt/ansible/ssh/
</code></pre>
<p>This allows ansible to use the ssh configuration stored on the tower server. This is what it looks like:</p>
<pre><code>
Host [nat instance public ip]
  User [nat instance user]
  IdentityFile [nat instance_pem file]
Host [sandbox private subnet].*
  User [sandbox user]
  StrictHostKeyChecking no
  IdentityFile [sandbox instance pem file]
  ProxyCommand ssh -i [nat instance pem file] -o StrictHostKeyChecking=no [nat_instance_user]@[nat_instance_public_ip] nc %h %p
</code></pre>
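<p>Before pointing tower at it, you can check the proxying works by hand (substitute the path to the ssh config above and a host in the sandbox subnet):</p>
<pre><code>
## should print the remote hostname if the ProxyCommand hop through the nat host works
ssh -F [path to ssh config] [sandbox user]@[sandbox private ip] hostname
</code></pre>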
<p>I won&#8217;t detail the tower configuration of the project/inventory/job/&#8230; here, as it&#8217;s quite straightforward.</p>
<h2>Ansible playbook</h2>
<p>So, we need two playbooks. The first one sets up a node and installs tomcat.</p>
<pre><code>
## site.yml
---
- name: Setup EC2
  hosts: tower-via-local
  gather_facts: no
  roles:
    - infrastructure
- name: install tomcat
  remote_user: ec2-user
  sudo: yes
  hosts: launched_servers
  roles:
    - tomcat
</code></pre>
<p>The second one deploys the war and edits a file.</p>
<pre><code>
### deploy.yml
---
- name: deploy app
  remote_user: ec2-user
  sudo: yes
  hosts: tag_Name_tomcat
  roles:
    - deploy_app
</code></pre>
<p>We need to split these up as both playbooks need a different inventory to work with:</p>
<ul>
<li>provision &amp; install: &#8216;tower-via-local&#8217; which is located in our VPC</li>
<li>configuration: here we get the sandbox vpc ec2 instances and select a specific tagname we gave during provisioning</li>
</ul>
<p>In theory, you could work with a callback to tower for the second part, but then you need to set up a way for the client to reach the server through the nat instance. As we don&#8217;t need this functionality right now, we&#8217;re skipping this step.</p>
<p>One of the nice things about tower is that you can create simple forms to provide variables to our playbooks. In this case I made two forms: one for the number of instances and one for the name of the war file.</p>
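<p>Outside tower, the same variables can be passed as extra vars on the command line (the variable names match the roles below; the count and sample.war are just example values):</p>
<pre><code>
ansible-playbook site.yml -e &quot;tomcat_instance_count=2&quot;
ansible-playbook deploy.yml -e &quot;war_file=sample.war&quot;
</code></pre>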
<pre><code>
# roles/infrastructure/tasks/main.yml
---
- name: Create the tomcat security group
  ec2_group:
    description: &quot;Open ssh and tomcat ports&quot;
    name: &quot;tomcat-sg&quot;
    region: &quot;{{ region }}&quot;
    rules:
      # only allow ssh access from the nat instance
      - proto: tcp
        from_port: 22
        to_port: 22
        cidr_ip: &quot;{{ nat_ip }}/32&quot;
      # open tomcat to the world
      - proto: tcp
        from_port: 8080
        to_port: 8080
        cidr_ip: 0.0.0.0/0
    purge_rules: yes
    state: present
    vpc_id: &quot;{{ vpc_id }}&quot;
  register: tomcat_sg 

# NOTE: only from ansible &gt; 2.0
# ansible can&#039;t create a role from an existing policy, so we&#039;re obligated to upload a new one (stored in roles/infrastructure/files)
#- name: Create S3 read-only access
#  iam_policy:
#    iam_name: s3-tomcat
#    iam_type: role
#    policy_document: s3-ro.json
#    policy_name: s3-ro
#    state: present

- name: Launch base server
  ec2:
    assign_public_ip: yes
    group_id: &quot;{{ tomcat_sg.group_id }}&quot;
    image: &quot;{{ tomcat_ami }}&quot;
    instance_type: &quot;{{ tomcat_instance_type }}&quot;
    instance_profile_name: s3-tomcat
    exact_count: &quot;{{ tomcat_instance_count }}&quot;
    count_tag: { &quot;Name&quot;: &quot;tomcat&quot; }
    key_name: &quot;{{ tomcat_kp }}&quot;
    region: &quot;{{ region }}&quot;
    vpc_subnet_id: &quot;{{ poc_subnet }}&quot;
    wait: no
    instance_tags: {
      &quot;Name&quot;: &quot;tomcat&quot;,
    }
  register: base_server 

# because wait_for doesn&#039;t use the ssh config, we have to delegate this task to the nat host. Otherwise tower will try to connect to port 22 directly
- name: Check if we can ssh to the instance
  wait_for:
    host: &quot;{{ item.private_ip }}&quot;
    port: 22
    state: started
  with_items: base_server.instances
  when: item.state != &quot;terminated&quot;
  delegate_to: &quot;{{ nat_ip }}&quot;

# add all servers to a temporary group we can use to install tomcat. We need this group as in the current playbook we have &#039;tower-via-local&#039; as inventory
- name: Add servers to temporary group
  add_host:
    hostname: &quot;{{ item.private_ip }}&quot;
    groupname: launched_servers
  with_items: base_server.instances
  when: item.state != &quot;terminated&quot;
</code></pre>
<p>The next role just installs tomcat</p>
<pre><code>
---
- name: Install basic software
  yum:
    name: &quot;{{ item }}&quot;
    state: present
  with_items:
    - java-1.8.0-openjdk
    - tomcat8
</code></pre>
<p>Ok, so now we have some servers ready to go. The next playbook uses a different inventory</p>
<pre><code>
# roles/deploy_app/tasks/main.yml
---
# our war file is 
- name: Deploy war file
  s3: 
    bucket: &quot;{{ war_bucket }}&quot;
    object: &quot;{{ war_file }}&quot; 
    dest: &quot;{{ war_deploy_path }}/{{ war_file }}&quot;
    mode: get 
    overwrite: no
  register: war_downloaded

- name: Set correct permissions
  file: 
    path: &quot;{{ war_deploy_path }}/{{ war_file }}&quot;
    owner: tomcat
    group: tomcat
  when: war_downloaded.changed
  register: war_deployed

- name: Restart tomcat
  service:
    name: tomcat8
    state: restarted
  when: war_deployed.changed

# here we cheat a little. The sample.war I&#039;m deploying contains an index.html that we want to edit. We just wait until the war is unpacked and the file is available
- name: Wait until war is deployed
  wait_for:
    path: &quot;{{ war_deploy_path }}/{{ app_name }}/index.html&quot;

# a task can only contain one module invocation, so the title and h1 edits need separate tasks
- name: Edit index title
  lineinfile:
    dest: &quot;{{ war_deploy_path }}/{{ app_name }}/index.html&quot;
    regexp: &#039;^&lt;title&gt;Sample &quot;Hello, World&quot; Application&lt;/title&gt;&#039;
    line: &#039;&lt;title&gt;Sample &quot;Hello, from cloudar&quot; Application&lt;/title&gt;&#039;
  when: war_deployed.changed

- name: Edit index header
  lineinfile:
    dest: &quot;{{ war_deploy_path }}/{{ app_name }}/index.html&quot;
    regexp: &#039;^&lt;h1&gt;Sample &quot;Hello, World&quot; Application&lt;/h1&gt;&#039;
    line: &#039;&lt;h1&gt;Sample &quot;Hello, from cloudar&quot; Application&lt;/h1&gt;&#039;
  when: war_deployed.changed
</code></pre>
<p>This concludes our little exercise! Feel free to leave comments, I&#8217;m sure there is still room for improvement.</p>
<p>The post <a href="https://cloudar.be/awsblog/ansible-tomcat-installation-war-deploy/">Ansible tomcat installation + .war deploy</a> appeared first on <a href="https://cloudar.be">Cloudar</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://cloudar.be/awsblog/ansible-tomcat-installation-war-deploy/feed/</wfw:commentRss>
			<slash:comments>2</slash:comments>
		
		
			</item>
	</channel>
</rss>
