<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Amazon Web Services Archives | Cloudar</title>
	<atom:link href="https://cloudar.be/tag/amazon-web-services/feed/" rel="self" type="application/rss+xml" />
	<link>https://cloudar.be/tag/amazon-web-services/</link>
	<description>100% Focus On AWS // 100% Customer Obsession</description>
	<lastBuildDate>Tue, 05 Oct 2021 14:32:48 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>Why AWS NLB stickiness is not always sticky</title>
		<link>https://cloudar.be/awsblog/why-aws-nlb-stickiness-is-not-always-sticky/</link>
		
		<dc:creator><![CDATA[Rutger Beyen]]></dc:creator>
		<pubDate>Tue, 05 Oct 2021 14:32:48 +0000</pubDate>
				<category><![CDATA[AWS Blog]]></category>
		<category><![CDATA[Amazon Web Services]]></category>
		<category><![CDATA[AWS]]></category>
		<category><![CDATA[NLB]]></category>
		<category><![CDATA[stickiness]]></category>
		<guid isPermaLink="false">https://www.cloudar.be/?p=19684</guid>

					<description><![CDATA[<p>Why AWS NLB stickiness is not always sticky We were recently working on an AWS setup which involved a Network LoadBalancer (NLB) with a TCP listener and a requirement for sticky sessions. As we were seeing some strange behavior which we couldn&#8217;t immediately explain and which might be linked to the session stickiness we decided [&#8230;]</p>
<p>The post <a href="https://cloudar.be/awsblog/why-aws-nlb-stickiness-is-not-always-sticky/">Why AWS NLB stickiness is not always sticky</a> appeared first on <a href="https://cloudar.be">Cloudar</a>.</p>
]]></description>
										<content:encoded><![CDATA[<h1>Why AWS NLB stickiness is not always sticky</h1>
<p>We were recently working on an AWS setup that involved a Network Load Balancer (NLB) with a TCP listener and a requirement for sticky sessions. Because we saw some strange behavior that we couldn&#8217;t immediately explain, and which might be linked to session stickiness, we decided to build a small test setup.</p>
<h2>The problem</h2>
<p>Unlike an ALB, where session stickiness is accomplished with cookies, the NLB maintains stickiness across backend servers with a built-in 5-tuple hash table. We access the NLB through its DNS name, which returns the IPs of the two NLB endpoints in round-robin fashion with a TTL of 60 seconds.</p>
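<p>The idea behind such a hash table can be illustrated with a toy sketch in Python (this illustrates flow hashing in general, not AWS&#8217;s actual algorithm): hash the 5-tuple and use the result to index the list of healthy targets, so packets belonging to the same flow always reach the same target.</p>

```python
import hashlib

def pick_target(src_ip, src_port, dst_ip, dst_port, protocol, targets):
    """Toy 5-tuple flow hashing: same flow -> same target (illustrative only)."""
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}/{protocol}".encode()
    digest = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    return targets[digest % len(targets)]

targets = ["server-az1", "server-az2"]
# The same 5-tuple always maps to the same target...
assert pick_target("203.0.113.10", 54321, "198.51.100.1", 443, "tcp", targets) == \
       pick_target("203.0.113.10", 54321, "198.51.100.1", 443, "tcp", targets)
# ...but a new source port (i.e. a new TCP connection) may land elsewhere.
```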
<p>We were looking for an answer to the following question: suppose our end user resolves the DNS name, picks the IP of the first NLB endpoint, and starts a connection; the session will be routed to one of the backend servers. After 60 seconds, however, the client could re-issue the DNS query and open its next connection through the other NLB endpoint. How will stickiness and cross-zone load balancing behave? Will our end user&#8217;s connection be routed to the initial server again, even if that means crossing AZ boundaries?</p>
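<p>You can observe the round-robin answer yourself by resolving the NLB&#8217;s DNS name; a small Python sketch (the hostname in the comment is illustrative):</p>

```python
import socket

def resolve_all(hostname, port=443):
    """Return the sorted set of IPs in the DNS answer for a hostname."""
    infos = socket.getaddrinfo(hostname, port, proto=socket.IPPROTO_TCP)
    return sorted({info[4][0] for info in infos})

# For an NLB with a node in each of two AZs this typically returns two IPs,
# e.g. resolve_all("my-nlb-1234567890.elb.eu-west-1.amazonaws.com")
```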
<h2>The situation</h2>
<p><img fetchpriority="high" decoding="async" class="alignnone wp-image-19685" src="https://cloudar.be/wp-content/uploads/2021/10/nlb1-650x433.png" alt="" width="642" height="428" srcset="https://cloudar.be/wp-content/uploads/2021/10/nlb1-650x433.png 650w, https://cloudar.be/wp-content/uploads/2021/10/nlb1-325x217.png 325w" sizes="(max-width: 642px) 100vw, 642px" /></p>
<p>We started with a classic setup comprising an NLB with an endpoint in each AZ and a target group with one instance as target in each AZ.</p>
<h3>Scenario #1</h3>
<ul>
<li>Cross-zone load balancing: Disabled</li>
<li>Target group stickiness: Disabled</li>
</ul>
<p>How does it behave?</p>
<ol>
<li>Client connects to the IP of the first NLB node: the connection is redirected to the server in AZ 1.</li>
<li>Client connects to the IP of the second NLB node: the connection is redirected to the server in AZ 2.</li>
</ol>
<p>Since there is only one healthy target per AZ and cross-zone load balancing is disabled, this results in &#8216;AZ stickiness&#8217;: traffic stays in the AZ in which it arrived. The setup relies on DNS to distribute client connections evenly across both NLB endpoints, but nothing guarantees that a specific user connection always reaches the same NLB endpoint, let alone the same backend server.</p>
<h3>Scenario #2</h3>
<ul>
<li>Cross-zone load balancing: Enabled</li>
<li>Target group stickiness: Disabled</li>
</ul>
<p>Allowing cross-zone load balancing and not requiring any stickiness should give us complete randomness, shouldn&#8217;t it?</p>
<p>And so it does. We now hit every backend server at random, irrespective of the NLB endpoint we use as &#8216;point of entry&#8217;. Works as expected.</p>
<h3><img decoding="async" class="alignnone size-medium wp-image-19686" src="https://cloudar.be/wp-content/uploads/2021/10/nlb2-650x433.png" alt="" width="650" height="433" srcset="https://cloudar.be/wp-content/uploads/2021/10/nlb2-650x433.png 650w, https://cloudar.be/wp-content/uploads/2021/10/nlb2-325x217.png 325w" sizes="(max-width: 650px) 100vw, 650px" /></h3>
<h3>Scenario #3</h3>
<ul>
<li>Cross-zone load balancing: Disabled</li>
<li>Target group stickiness: Enabled</li>
</ul>
<p>We&#8217;ve now enabled stickiness on the target group and disabled cross-zone load balancing again. Let&#8217;s hope our client connection now sticks to a specific backend server.</p>
<ol>
<li>Client connects to the IP of the first NLB node: the connection is redirected to the server in AZ 1.</li>
<li>Client connects to the IP of the second NLB node: the connection is redirected to the server in AZ 2.</li>
</ol>
<p>Hold on: we&#8217;ve asked our target group to be sticky, yet our connection is still balanced over both backend servers? What&#8217;s going on?</p>
<p><img decoding="async" class="alignnone size-medium wp-image-19687" src="https://cloudar.be/wp-content/uploads/2021/10/nlb4-650x433.png" alt="" width="650" height="433" srcset="https://cloudar.be/wp-content/uploads/2021/10/nlb4-650x433.png 650w, https://cloudar.be/wp-content/uploads/2021/10/nlb4-325x217.png 325w" sizes="(max-width: 650px) 100vw, 650px" /></p>
<p>Because cross-zone load balancing is disabled, the NLB cannot send the connection to the same backend every time. The connection enters via NLB endpoint 1, but stickiness may have pinned the client to the server in AZ 2, which endpoint 1 cannot reach when cross-zone load balancing is off. Stickiness fails; the disabled cross-zone load balancing wins&#8230;</p>
<p>With only one healthy backend per AZ, this behaves the same as not enabling stickiness at all. We&#8217;re pretty sure that with more than one backend per AZ the stickiness is maintained&#8230; but only within that AZ. Interesting!</p>
<h3>Scenario #4</h3>
<ul>
<li>Cross-zone load balancing: Enabled</li>
<li>Target group stickiness: Enabled</li>
</ul>
<p>Let&#8217;s solve this. We&#8217;ve enabled both cross-zone load balancing and target group stickiness, so we should hit the same backend server every time now.</p>
<p><img loading="lazy" decoding="async" class="alignnone size-medium wp-image-19688" src="https://cloudar.be/wp-content/uploads/2021/10/nlb3-650x433.png" alt="" width="650" height="433" srcset="https://cloudar.be/wp-content/uploads/2021/10/nlb3-650x433.png 650w, https://cloudar.be/wp-content/uploads/2021/10/nlb3-325x217.png 325w" sizes="auto, (max-width: 650px) 100vw, 650px" /></p>
<p>And so it does. Only now do we reach true stickiness and hit the same backend server every time, no matter how hard we try to break it by entering via the load balancer node in the other AZ.</p>
<h2>The conclusion</h2>
<p>If you don&#8217;t allow cross-zone load balancing, stickiness is only effective within AZ boundaries. And because DNS round robin can direct a client to a different point of entry after the TTL expires, strict stickiness is not guaranteed.</p>
<p>So if you really need stickiness to a specific backend target, you need to enable cross-zone load balancing (and live with the extra cost of inter-AZ traffic). Only then do the different load balancer nodes share the &#8220;client-to-target&#8221; stickiness table.</p>
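<p>The four scenarios can be reproduced with a toy model (a sketch of the behavior we observed, not AWS&#8217;s implementation): each NLB node keeps its own stickiness table unless cross-zone load balancing is enabled, in which case the nodes effectively share one.</p>

```python
def route(client, node_az, cross_zone, sticky, tables, targets_by_az):
    """Pick a backend for a client arriving at the NLB node in node_az."""
    table = tables["shared"] if cross_zone else tables[node_az]
    if sticky and client in table:
        return table[client]
    # Without cross-zone load balancing a node only sees its own AZ's targets.
    pool = (sorted(t for ts in targets_by_az.values() for t in ts)
            if cross_zone else targets_by_az[node_az])
    target = pool[hash(client) % len(pool)]
    if sticky:
        table[client] = target
    return target

targets = {"az1": ["server-1"], "az2": ["server-2"]}

# Scenario 4: cross-zone + stickiness -> same backend via either node.
tables = {"shared": {}}
first = route("alice", "az1", True, True, tables, targets)
again = route("alice", "az2", True, True, tables, targets)
assert first == again

# Scenario 3: stickiness without cross-zone -> the AZ wins.
tables = {"az1": {}, "az2": {}}
assert route("alice", "az1", False, True, tables, targets) == "server-1"
assert route("alice", "az2", False, True, tables, targets) == "server-2"
```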
<p>&nbsp;</p>
<p>Kind of logical, though&#8230;</p>
<p>&nbsp;</p>
<p>PS: the NLB idle timeout for TCP connections is 350 seconds. Once the timeout is reached or the session is terminated, the NLB forgets the stickiness; subsequent packets are treated as a new flow and may be load-balanced to a different target.</p>
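<p>That idle timeout can be pictured as a flow table whose entries expire when unused (a toy sketch, not AWS internals):</p>

```python
class FlowTable:
    """Toy NLB flow table: entries expire after an idle timeout."""

    def __init__(self, idle_timeout=350):   # 350 s, the NLB TCP idle timeout
        self.idle_timeout = idle_timeout
        self.entries = {}                   # flow 5-tuple -> (target, last_seen)

    def lookup(self, flow, now):
        hit = self.entries.get(flow)
        if hit and now - hit[1] < self.idle_timeout:
            self.entries[flow] = (hit[0], now)   # packet seen: refresh idle timer
            return hit[0]
        self.entries.pop(flow, None)             # expired: forget the flow
        return None                              # caller re-balances as a new flow

    def record(self, flow, target, now):
        self.entries[flow] = (target, now)

table = FlowTable()
table.record("flow-a", "server-1", now=0)
assert table.lookup("flow-a", now=300) == "server-1"   # still within 350 s
assert table.lookup("flow-a", now=700) is None         # idle too long: new flow
```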
<p>The post <a href="https://cloudar.be/awsblog/why-aws-nlb-stickiness-is-not-always-sticky/">Why AWS NLB stickiness is not always sticky</a> appeared first on <a href="https://cloudar.be">Cloudar</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>10 random things you probably didn&#8217;t know about Cloudar</title>
		<link>https://cloudar.be/awsblog/10-random-things-you-might-not-have-known-about-cloudar/</link>
		
		<dc:creator><![CDATA[Bart Van Hecke]]></dc:creator>
		<pubDate>Thu, 20 Aug 2020 09:57:41 +0000</pubDate>
				<category><![CDATA[AWS Blog]]></category>
		<category><![CDATA[Amazon Web Services]]></category>
		<category><![CDATA[AWS]]></category>
		<guid isPermaLink="false">https://www.cloudar.be/?p=18041</guid>

					<description><![CDATA[<p>The post <a href="https://cloudar.be/awsblog/10-random-things-you-might-not-have-known-about-cloudar/">10 random things you probably didn&#8217;t know about Cloudar</a> appeared first on <a href="https://cloudar.be">Cloudar</a>.</p>
]]></description>
										<content:encoded><![CDATA[<div class="wpb-content-wrapper"><section id="ut-section-69b9c1d04ae12" data-vc-full-width="true" data-vc-full-width-init="false" data-cursor-skin="global" class="vc_section ut-vc-160 vc_section-has-no-fill ut-section-69b9c1d04ae23"><div id="ut-row-69b9c1d0943f2" data-vc-full-width="true" data-vc-full-width-init="false" class="vc_row wpb_row vc_row-fluid vc_column-gap-0 ut-row-69b9c1d094406" ><div class="wpb_column vc_column_container vc_col-sm-12" ><div id="ut_inner_column_69b9c1d0a9784" class="vc_column-inner " ><div class="wpb_wrapper"><style type="text/css">#ut_title_divider_69b9c1d0a9afe { letter-spacing: 0em; }</style><h3 id="ut_title_divider_69b9c1d0a9afe" class="bklyn-title-divider  bklyn-divider-style-4 bklyn-title-divider-left bklyn-title-divider-tablet-left bklyn-title-divider-mobile-left"  ><span>1. The Summerbreeze Metal Festival</span></h3>
	<div class="wpb_text_column wpb_content_element" >
		<div class="wpb_wrapper">
<p>In August 2013, Senne Vaeyens and Bart Van Hecke attended the Summerbreeze festival in Dinkelsbühl, Germany. Being long-time friends who had already been active in the wonderful world of Information Technology for several years, they discussed the possibility of starting a company together. After a long night of heavy music and lots of beers, the foundation of Cloudar was laid…</p>

		</div>
	</div>

<div id="vc-sep-69b9c1d0aa7bf" class="vc_separator wpb_content_element vc_separator_align_center vc_sep_width_100 vc_sep_pos_align_center vc_separator_no_text vc_sep_color_grey  wpb_content_element" ><span class="vc_sep_holder vc_sep_holder_l"><span  class="vc_sep_line"></span></span><span class="vc_sep_holder vc_sep_holder_r"><span  class="vc_sep_line"></span></span>
</div></div></div></div></div><div class="vc_row-full-width vc_clearfix"></div><div id="ut-row-69b9c1d0ab24a" data-vc-full-width="true" data-vc-full-width-init="false" class="vc_row wpb_row vc_row-fluid vc_column-gap-0 ut-row-69b9c1d0ab25d" ><div class="wpb_column vc_column_container vc_col-sm-12" ><div id="ut_inner_column_69b9c1d0aba2d" class="vc_column-inner " ><div class="wpb_wrapper"><style type="text/css">#ut_title_divider_69b9c1d0abaec { letter-spacing: 0em; }</style><h3 id="ut_title_divider_69b9c1d0abaec" class="bklyn-title-divider  bklyn-divider-style-4 bklyn-title-divider-left bklyn-title-divider-tablet-left bklyn-title-divider-mobile-left"  ><span>2. Cloudar started as a Cloud Broker</span></h3>
	<div class="wpb_text_column wpb_content_element" >
		<div class="wpb_wrapper">
			<p>Cloudar was started as a Cloud Broker company, acting as a trusted advisor for customers and providing a single point of contact regardless of the cloud technology the customers were using. After evaluating this setup for a few months, Senne &amp; Bart noticed that real focus was missing. They decided that it would be better to become great at only one thing than to be mediocre at several. As Amazon Web Services (AWS) was already the preferred platform for several projects, it was decided that Cloudar would only focus on AWS and become a trustworthy AWS partner.</p>
<p>&nbsp;</p>
<p><em><strong>Fun fact</strong>: The name Cloudar combines the words &#8220;Cloud&#8221; and &#8220;Radar&#8221;. When the company started as a cloud broker, we would be the &#8216;radar&#8217; that helped our customers find the cloud provider best suited to their needs. Nowadays Cloudar stands for &#8220;Cloud Architects&#8221;, but the radar remained in the company logo&#8230;</em></p>

		</div>
	</div>

<div id="vc-sep-69b9c1d0ac019" class="vc_separator wpb_content_element vc_separator_align_center vc_sep_width_100 vc_sep_pos_align_center vc_separator_no_text vc_sep_color_grey  wpb_content_element" ><span class="vc_sep_holder vc_sep_holder_l"><span  class="vc_sep_line"></span></span><span class="vc_sep_holder vc_sep_holder_r"><span  class="vc_sep_line"></span></span>
</div></div></div></div></div><div class="vc_row-full-width vc_clearfix"></div><div id="ut-row-69b9c1d0ac8fe" data-vc-full-width="true" data-vc-full-width-init="false" class="vc_row wpb_row vc_row-fluid vc_column-gap-0 ut-row-69b9c1d0ac90c" ><div class="wpb_column vc_column_container vc_col-sm-12" ><div id="ut_inner_column_69b9c1d0ad0ce" class="vc_column-inner " ><div class="wpb_wrapper"><style type="text/css">#ut_title_divider_69b9c1d0ad18a { letter-spacing: 0em; }</style><h3 id="ut_title_divider_69b9c1d0ad18a" class="bklyn-title-divider  bklyn-divider-style-4 bklyn-title-divider-left bklyn-title-divider-tablet-left bklyn-title-divider-mobile-left"  ><span>3. Cloudar joined the Cronos Group in October 2014</span></h3>
	<div class="wpb_text_column wpb_content_element" >
		<div class="wpb_wrapper">
<p>To maintain our focus on AWS, it became clear that we needed to surround ourselves with experts in other areas of the IT spectrum. As the Cronos Group is an incubator that consists of multiple independent companies, each with its own expertise, joining this group was a no-brainer.</p>

		</div>
	</div>

<div id="vc-sep-69b9c1d0ad6ab" class="vc_separator wpb_content_element vc_separator_align_center vc_sep_width_100 vc_sep_pos_align_center vc_separator_no_text vc_sep_color_grey  wpb_content_element" ><span class="vc_sep_holder vc_sep_holder_l"><span  class="vc_sep_line"></span></span><span class="vc_sep_holder vc_sep_holder_r"><span  class="vc_sep_line"></span></span>
</div></div></div></div></div><div class="vc_row-full-width vc_clearfix"></div><div id="ut-row-69b9c1d0adfb0" data-vc-full-width="true" data-vc-full-width-init="false" class="vc_row wpb_row vc_row-fluid vc_column-gap-0 ut-row-69b9c1d0adfc1" ><div class="wpb_column vc_column_container vc_col-sm-12" ><div id="ut_inner_column_69b9c1d0ae72b" class="vc_column-inner " ><div class="wpb_wrapper"><style type="text/css">#ut_title_divider_69b9c1d0ae7e5 { letter-spacing: 0em; }</style><h3 id="ut_title_divider_69b9c1d0ae7e5" class="bklyn-title-divider  bklyn-divider-style-4 bklyn-title-divider-left bklyn-title-divider-tablet-left bklyn-title-divider-mobile-left"  ><span>4. First hire in January 2015</span></h3>
	<div class="wpb_text_column wpb_content_element" >
		<div class="wpb_wrapper">
<p>In January 2015, Ben Bridts joined the company, and to this day he is the company’s go-to AWS expert. Ben has in-depth AWS knowledge, helps define Cloudar’s technical strategy and is an official AWS Ambassador (<a href="https://www.ambassador-lounge.com/ambassadors/ben-bridts/">https://www.ambassador-lounge.com/ambassadors/ben-bridts/</a>). Hit him up if you have questions about AWS (@benbridts).</p>

		</div>
	</div>

<div id="vc-sep-69b9c1d0aed26" class="vc_separator wpb_content_element vc_separator_align_center vc_sep_width_100 vc_sep_pos_align_center vc_separator_no_text vc_sep_color_grey  wpb_content_element" ><span class="vc_sep_holder vc_sep_holder_l"><span  class="vc_sep_line"></span></span><span class="vc_sep_holder vc_sep_holder_r"><span  class="vc_sep_line"></span></span>
</div></div></div></div></div><div class="vc_row-full-width vc_clearfix"></div><div id="ut-row-69b9c1d0af61e" data-vc-full-width="true" data-vc-full-width-init="false" class="vc_row wpb_row vc_row-fluid vc_column-gap-0 ut-row-69b9c1d0af634" ><div class="wpb_column vc_column_container vc_col-sm-12" ><div id="ut_inner_column_69b9c1d0afdeb" class="vc_column-inner " ><div class="wpb_wrapper"><style type="text/css">#ut_title_divider_69b9c1d0afe90 { letter-spacing: 0em; }</style><h3 id="ut_title_divider_69b9c1d0afe90" class="bklyn-title-divider  bklyn-divider-style-4 bklyn-title-divider-left bklyn-title-divider-tablet-left bklyn-title-divider-mobile-left"  ><span>5. In April 2015, Cloudar joins Xplore Group</span></h3>
	<div class="wpb_text_column wpb_content_element" >
		<div class="wpb_wrapper">
<p>The Cronos Group is divided into several clusters; some specialize in ‘traditional IT services’, while others focus on entirely different areas. Xplore Group is one of these clusters, with a clear focus on E-commerce, Data Science, Cloud Native Development, IoT, Machine Learning, …</p>
<p>As Cloudar prefers to talk with the business directly, while maintaining a good relationship with developers as well, the Xplore Group cluster seemed a good fit.</p>
<p>Together with multiple Xplore Group companies, Cloudar is now able to execute large enterprise projects whose scope extends beyond AWS expertise.</p>

		</div>
	</div>

<div id="vc-sep-69b9c1d0b03a2" class="vc_separator wpb_content_element vc_separator_align_center vc_sep_width_100 vc_sep_pos_align_center vc_separator_no_text vc_sep_color_grey  wpb_content_element" ><span class="vc_sep_holder vc_sep_holder_l"><span  class="vc_sep_line"></span></span><span class="vc_sep_holder vc_sep_holder_r"><span  class="vc_sep_line"></span></span>
</div></div></div></div></div><div class="vc_row-full-width vc_clearfix"></div><div id="ut-row-69b9c1d0b0c24" data-vc-full-width="true" data-vc-full-width-init="false" class="vc_row wpb_row vc_row-fluid vc_column-gap-0 ut-row-69b9c1d0b0c33" ><div class="wpb_column vc_column_container vc_col-sm-12" ><div id="ut_inner_column_69b9c1d0b13bf" class="vc_column-inner " ><div class="wpb_wrapper"><style type="text/css">#ut_title_divider_69b9c1d0b148f { letter-spacing: 0em; }</style><h3 id="ut_title_divider_69b9c1d0b148f" class="bklyn-title-divider  bklyn-divider-style-4 bklyn-title-divider-left bklyn-title-divider-tablet-left bklyn-title-divider-mobile-left"  ><span>6. Some numbers (August 2020)</span></h3>
	<div class="wpb_text_column wpb_content_element" >
		<div class="wpb_wrapper">
			<ul>
<li>Current headcount: 32 (and growing)</li>
<li>Customer base: &gt;100</li>
<li>Monthly AWS spend: &gt; $1M</li>
<li>Total AWS certifications: &gt;100</li>
<li>AWS programs &amp; competencies:
<ul>
<li>Premier Consulting Partner</li>
<li>Solution Provider</li>
<li>Public Sector Partner</li>
<li>Well-Architected Partner</li>
<li>Immersion Day Partner</li>
<li>Managed Service Provider Partner</li>
<li>Migration Competency</li>
<li>DevOps Competency</li>
<li>Government Competency</li>
<li>Lambda Service Delivery</li>
</ul>
</li>
</ul>

		</div>
	</div>

<div id="vc-sep-69b9c1d0b199b" class="vc_separator wpb_content_element vc_separator_align_center vc_sep_width_100 vc_sep_pos_align_center vc_separator_no_text vc_sep_color_grey  wpb_content_element" ><span class="vc_sep_holder vc_sep_holder_l"><span  class="vc_sep_line"></span></span><span class="vc_sep_holder vc_sep_holder_r"><span  class="vc_sep_line"></span></span>
</div></div></div></div></div><div class="vc_row-full-width vc_clearfix"></div><div id="ut-row-69b9c1d0b21aa" data-vc-full-width="true" data-vc-full-width-init="false" class="vc_row wpb_row vc_row-fluid vc_column-gap-0 ut-row-69b9c1d0b21bb" ><div class="wpb_column vc_column_container vc_col-sm-12" ><div id="ut_inner_column_69b9c1d0b2912" class="vc_column-inner " ><div class="wpb_wrapper"><style type="text/css">#ut_title_divider_69b9c1d0b29d4 { letter-spacing: 0em; }</style><h3 id="ut_title_divider_69b9c1d0b29d4" class="bklyn-title-divider  bklyn-divider-style-4 bklyn-title-divider-left bklyn-title-divider-tablet-left bklyn-title-divider-mobile-left"  ><span>7. Partnerships</span></h3>
	<div class="wpb_text_column wpb_content_element" >
		<div class="wpb_wrapper">
			<p>It goes without saying that great things can only be achieved if you surround yourself with the right partners…</p>
<p>While AWS remains our focus, you need partners to deliver the full picture to your customers. Some solid partnerships we have established throughout the years:</p>
<ul>
<li>CloudCheckr: mainly used for cost optimization and providing best practices to our customers</li>
<li>VMware: to help us support VMC on AWS</li>
<li>Trend Micro: for compliance and securing AWS environments</li>
<li>N2WS: for backup and disaster recovery purposes</li>
<li>Site24x7: for monitoring &amp; alerting</li>
<li>The Cronos Group &amp; Xplore Group: for delivering expertise in areas other than AWS</li>
</ul>

		</div>
	</div>

<div id="vc-sep-69b9c1d0b2f3a" class="vc_separator wpb_content_element vc_separator_align_center vc_sep_width_100 vc_sep_pos_align_center vc_separator_no_text vc_sep_color_grey  wpb_content_element" ><span class="vc_sep_holder vc_sep_holder_l"><span  class="vc_sep_line"></span></span><span class="vc_sep_holder vc_sep_holder_r"><span  class="vc_sep_line"></span></span>
</div></div></div></div></div><div class="vc_row-full-width vc_clearfix"></div><div id="ut-row-69b9c1d0b384a" data-vc-full-width="true" data-vc-full-width-init="false" class="vc_row wpb_row vc_row-fluid vc_column-gap-0 ut-row-69b9c1d0b385b" ><div class="wpb_column vc_column_container vc_col-sm-12" ><div id="ut_inner_column_69b9c1d0b3fc9" class="vc_column-inner " ><div class="wpb_wrapper"><style type="text/css">#ut_title_divider_69b9c1d0b406a { letter-spacing: 0em; }</style><h3 id="ut_title_divider_69b9c1d0b406a" class="bklyn-title-divider  bklyn-divider-style-4 bklyn-title-divider-left bklyn-title-divider-tablet-left bklyn-title-divider-mobile-left"  ><span>8. Why would you engage us?</span></h3>
	<div class="wpb_text_column wpb_content_element" >
		<div class="wpb_wrapper">
<p>When you decide to work with us, you’ll be working with a team of AWS Certified Professionals who have the highest level of demonstrated expertise and skill with the AWS Cloud:</p>
<ul>
<li>We can do the heavy lifting for you</li>
<li>We’ll make sure you only pay for what you need</li>
<li>We can lower the cost of your ‘migration bubble’</li>
<li>We can help accelerate your projects, so you’ll be able to meet your deadlines</li>
<li>We can educate your staff</li>
<li>We can take care of your AWS environment 24/7</li>
</ul>

		</div>
	</div>

<div id="vc-sep-69b9c1d0b4540" class="vc_separator wpb_content_element vc_separator_align_center vc_sep_width_100 vc_sep_pos_align_center vc_separator_no_text vc_sep_color_grey  wpb_content_element" ><span class="vc_sep_holder vc_sep_holder_l"><span  class="vc_sep_line"></span></span><span class="vc_sep_holder vc_sep_holder_r"><span  class="vc_sep_line"></span></span>
</div></div></div></div></div><div class="vc_row-full-width vc_clearfix"></div><div id="ut-row-69b9c1d0b4d49" data-vc-full-width="true" data-vc-full-width-init="false" class="vc_row wpb_row vc_row-fluid vc_column-gap-0 ut-row-69b9c1d0b4d57" ><div class="wpb_column vc_column_container vc_col-sm-12" ><div id="ut_inner_column_69b9c1d0b54e2" class="vc_column-inner " ><div class="wpb_wrapper"><style type="text/css">#ut_title_divider_69b9c1d0b557f { letter-spacing: 0em; }</style><h3 id="ut_title_divider_69b9c1d0b557f" class="bklyn-title-divider  bklyn-divider-style-4 bklyn-title-divider-left bklyn-title-divider-tablet-left bklyn-title-divider-mobile-left"  ><span>9. CSF</span></h3>
	<div class="wpb_text_column wpb_content_element" >
		<div class="wpb_wrapper">
			<p>We apply the “Common Sense Framework” (CSF) on a daily basis.</p>
<p>Although we are ISO/IEC 27001 certified for Information Security to comply with the highest security standards, and combine several IT methodologies (ITIL, Agile, DevOps, PRINCE2, …), the most important framework we use is CSF. CSF is and always will be the backbone of Cloudar. Every person, situation, customer or project is different, so we need to adapt quickly and apply common sense wherever we can. Common sense keeps us flexible and lets us respond quickly in any situation.</p>
<p>CSF is here to stay!</p>
<p>&nbsp;</p>
<p><strong><span style="color: #808080;"><em>Note: In the Flemish part of Belgium, CSF is also known as &#8220;Gezond Boerenverstand&#8221;</em></span></strong></p>

		</div>
	</div>

<div id="vc-sep-69b9c1d0b5a88" class="vc_separator wpb_content_element vc_separator_align_center vc_sep_width_100 vc_sep_pos_align_center vc_separator_no_text vc_sep_color_grey  wpb_content_element" ><span class="vc_sep_holder vc_sep_holder_l"><span  class="vc_sep_line"></span></span><span class="vc_sep_holder vc_sep_holder_r"><span  class="vc_sep_line"></span></span>
</div></div></div></div></div><div class="vc_row-full-width vc_clearfix"></div><div id="ut-row-69b9c1d0b635d" data-vc-full-width="true" data-vc-full-width-init="false" class="vc_row wpb_row vc_row-fluid vc_column-gap-0 ut-row-69b9c1d0b636e" ><div class="wpb_column vc_column_container vc_col-sm-12" ><div id="ut_inner_column_69b9c1d0b6aba" class="vc_column-inner " ><div class="wpb_wrapper"><style type="text/css">#ut_title_divider_69b9c1d0b6b8e { letter-spacing: 0em; }</style><h3 id="ut_title_divider_69b9c1d0b6b8e" class="bklyn-title-divider  bklyn-divider-style-4 bklyn-title-divider-left bklyn-title-divider-tablet-left bklyn-title-divider-mobile-left"  ><span>10. The future</span></h3>
	<div class="wpb_text_column wpb_content_element" >
		<div class="wpb_wrapper">
			<p>Well, that’s a tricky one…</p>
<p>In this ever-changing world, who knows what the future might bring? It has been our goal from day one to become one of the leading AWS partners in the EMEA region, and I think we can say that we did a good job so far.</p>
<p>For the following years, we’ll keep growing as a company, expanding our customer base and exploring new areas, just as we always have. Now is not the time to sit back and think the journey is finished. In fact, I believe our journey has only begun; with so many opportunities in front of us, it&#8217;s time to shift up a gear!</p>
<p>To quote Mario Andretti: “If everything seems under control, you’re not going fast enough…”</p>

		</div>
	</div>

<div id="vc-sep-69b9c1d0b706c" class="vc_separator wpb_content_element vc_separator_align_center vc_sep_width_100 vc_sep_pos_align_center vc_separator_no_text vc_sep_color_grey  wpb_content_element" ><span class="vc_sep_holder vc_sep_holder_l"><span  class="vc_sep_line"></span></span><span class="vc_sep_holder vc_sep_holder_r"><span  class="vc_sep_line"></span></span>
</div></div></div></div></div><div class="vc_row-full-width vc_clearfix"></div><a data-id="section-without-id" class="ut-vc-offset-anchor-bottom" name="section-without-id"></a></section><div class="vc_row-full-width vc_clearfix"></div>
</div><p>The post <a href="https://cloudar.be/awsblog/10-random-things-you-might-not-have-known-about-cloudar/">10 random things you probably didn&#8217;t know about Cloudar</a> appeared first on <a href="https://cloudar.be">Cloudar</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Validate ACM certificates in Cloudformation</title>
		<link>https://cloudar.be/awsblog/validate-acm-certificates-in-cloudformation/</link>
		
		<dc:creator><![CDATA[Michiel Vanderlinden]]></dc:creator>
		<pubDate>Wed, 08 Jan 2020 08:03:21 +0000</pubDate>
				<category><![CDATA[AWS Blog]]></category>
		<category><![CDATA[acm]]></category>
		<category><![CDATA[Amazon Web Services]]></category>
		<category><![CDATA[automatically validate acm]]></category>
		<category><![CDATA[automation]]></category>
		<category><![CDATA[AWS]]></category>
		<category><![CDATA[cloudformation]]></category>
		<category><![CDATA[custom resource]]></category>
		<category><![CDATA[DevOps]]></category>
		<category><![CDATA[python]]></category>
		<guid isPermaLink="false">https://www.cloudar.be/?p=16482</guid>

					<description><![CDATA[<p>Intro: We will use a custom resource written in Python that will be able to create ACM certificates with DNS validation. The custom resource will also automatically validate this certificate if the validation domain is managed by a Route53 hosted zone. We will also be able to specify an AWS region to create the certificate [&#8230;]</p>
<p>The post <a href="https://cloudar.be/awsblog/validate-acm-certificates-in-cloudformation/">Validate ACM certificates in Cloudformation</a> appeared first on <a href="https://cloudar.be">Cloudar</a>.</p>
]]></description>
										<content:encoded><![CDATA[<h3>Intro:</h3>
<p>We will use a custom resource written in Python that can create ACM certificates with DNS validation. The custom resource will also automatically validate the certificate if the validation domain is managed by a Route 53 hosted zone. We can also specify the AWS region in which to create the certificate; this region is independent of the CloudFormation stack region, which makes it possible, for example, to deploy a certificate in us-east-1 (to use with CloudFront) while deploying the stack in eu-west-1. The resource also provides the certificate ARN as an output so other resources in the stack can use it. Lastly, when you delete the custom resource, it cleans up all validation records and the certificate itself.</p>
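<p>The core of the DNS validation step is simple: for each domain, ACM returns a CNAME record in its <code>DomainValidationOptions</code>, and the custom resource upserts that record into the hosted zone. A sketch of that translation (the function name and TTL here are illustrative; the repository&#8217;s actual code may differ):</p>

```python
def validation_change(option):
    """Build one Route 53 change from an ACM DomainValidationOptions entry."""
    rr = option["ResourceRecord"]   # shape returned by acm describe_certificate
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": rr["Name"],
            "Type": rr["Type"],     # always CNAME for DNS validation
            "TTL": 300,             # illustrative TTL
            "ResourceRecords": [{"Value": rr["Value"]}],
        },
    }

# With boto3 (not shown), the changes would then be submitted via
# route53.change_resource_record_sets(HostedZoneId=..., ChangeBatch=...).
option = {"ResourceRecord": {"Name": "_x1.example.com.", "Type": "CNAME",
                             "Value": "_y1.acm-validations.aws."}}
assert validation_change(option)["ResourceRecordSet"]["Type"] == "CNAME"
```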
<h3>Requirements:</h3>
<ul>
<li>Python3</li>
<li>Pip</li>
<li>Bash</li>
<li>Zip</li>
<li>An S3 bucket to deploy the custom resource package on</li>
<li>A hosted zone for the validation record</li>
</ul>
<h3>Implementation:</h3>
<p>Let&#8217;s get started by downloading all the required code from our <a href="https://github.com/WeAreCloudar/cloudar_acm_plus">GitHub repository</a>.</p>
<h4>Step 1: Uploading the custom resource package</h4>
<p>In this step we are going to prepare the custom resource package and upload it to an S3 bucket.</p>
<p>First we go into the custom resource directory.<br />
<code>cd cloudar-acm-plus-custom-resource</code></p>
<p>Next we execute a script to install all required dependencies.<br />
<code>sh install_dependencies</code></p>
<p>Now we are ready to create the package.<br />
<code>sh pack_custom_resource</code></p>
<p><img loading="lazy" decoding="async" class="alignnone wp-image-16487 " src="https://cloudar.be/wp-content/uploads/2020/01/auto_validate_acm_package_commands.png" alt="package commands" width="595" height="378" srcset="https://cloudar.be/wp-content/uploads/2020/01/auto_validate_acm_package_commands.png 1120w, https://cloudar.be/wp-content/uploads/2020/01/auto_validate_acm_package_commands-768x488.png 768w" sizes="auto, (max-width: 595px) 100vw, 595px" /></p>
<p>You will now find the zip file &#8216;cloudar-acm-plus-custom-resource.zip&#8217; in &#8216;cloudar-acm-plus-custom-resource/packed&#8217;. Upload this zip file to your S3 bucket.</p>
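<p>The upload can be done with the AWS CLI; replace the bucket name below with your own.<br />
<code>aws s3 cp packed/cloudar-acm-plus-custom-resource.zip s3://your-bucket-name/</code></p>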
<h4>Step 2: Creating a CloudFormation template</h4>
<p>Now we can create a CloudFormation template that uses this custom resource to create an ACM certificate.<br />
You can use the template &#8216;cfn.yaml&#8217; as an example.</p>
<p>First, create a Lambda resource as follows:<br />
<img loading="lazy" decoding="async" class="alignnone wp-image-16492 " src="https://cloudar.be/wp-content/uploads/2020/01/auto_validate_acm_cfn_lambda.png" alt="auto validate lambda" width="520" height="475" srcset="https://cloudar.be/wp-content/uploads/2020/01/auto_validate_acm_cfn_lambda.png 1042w, https://cloudar.be/wp-content/uploads/2020/01/auto_validate_acm_cfn_lambda-768x702.png 768w, https://cloudar.be/wp-content/uploads/2020/01/auto_validate_acm_cfn_lambda-788x720.png 788w" sizes="auto, (max-width: 520px) 100vw, 520px" /></p>
<p>Use the name of your bucket for the property &#8216;S3Bucket&#8217;.</p>
<p>Next we create the custom resource.<br />
<img loading="lazy" decoding="async" class="alignnone wp-image-16493 " src="https://cloudar.be/wp-content/uploads/2020/01/auto_validate_cfn_cr.png" alt="auto validate cfn cr" width="454" height="287" srcset="https://cloudar.be/wp-content/uploads/2020/01/auto_validate_cfn_cr.png 832w, https://cloudar.be/wp-content/uploads/2020/01/auto_validate_cfn_cr-768x486.png 768w" sizes="auto, (max-width: 454px) 100vw, 454px" /></p>
<p>We can set the following properties here:</p>
<ul>
<li>DomainName: (REQUIRED, type: String) The domain name for the ACM certificate.</li>
<li>AdditionalDomains: (OPTIONAL, type: List) Additional domain names for the ACM certificate.</li>
<li>ValidationDomain: (REQUIRED, type: String) The validation domain for the ACM certificate.</li>
<li>HostedZoneId: (REQUIRED, type: String) The hosted zone id for the validation domain of the ACM certificate.</li>
<li>CertificateRegion: (REQUIRED, type: String) The region to deploy the ACM certificate in.</li>
<li>IdempotencyToken: (REQUIRED, type: String, pattern: \w+) The idempotency token for the create call of the ACM certificate, see <a href="https://docs.aws.amazon.com/acm/latest/APIReference/API_RequestCertificate.html#ACM-RequestCertificate-request-IdempotencyToken" rel="nofollow">the RequestCertificate API reference</a>.</li>
<li>CertificateTags: (OPTIONAL, type: List) The tags for the ACM certificate.</li>
</ul>
<p>For the DNS record cleanup and certificate deletion to work when you delete the CloudFormation stack, it is important to set the following output.<br />
<img loading="lazy" decoding="async" class="alignnone wp-image-16495 " src="https://cloudar.be/wp-content/uploads/2020/01/auto_validate_cfn_output.png" alt="auto validate cfn output" width="666" height="98" srcset="https://cloudar.be/wp-content/uploads/2020/01/auto_validate_cfn_output.png 1238w, https://cloudar.be/wp-content/uploads/2020/01/auto_validate_cfn_output-768x113.png 768w" sizes="auto, (max-width: 666px) 100vw, 666px" /></p>
<p>As you can see, we can access the ARN of the certificate created by the custom resource with the GetAtt function on the resource.<br />
<code>!GetAtt CreateCertificateCustomResource.certificate_arn</code></p>
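<p>Putting these pieces together, a stripped-down template could look like the sketch below. The logical IDs, handler name, custom resource type name and all values are illustrative; refer to &#8216;cfn.yaml&#8217; in the repository for the actual template.</p>
<pre class="theme:eclipse toolbar-overlay:false nums:false nums-toggle:false expand-toggle:false lang:default decode:true">Resources:
  CreateCertificateLambda:
    Type: AWS::Lambda::Function
    Properties:
      Runtime: python3.6
      Handler: index.handler            # illustrative, check the repository
      Role: !GetAtt LambdaExecutionRole.Arn
      Code:
        S3Bucket: your-bucket-name      # the bucket holding the package
        S3Key: cloudar-acm-plus-custom-resource.zip

  CreateCertificateCustomResource:
    Type: Custom::AcmCertificatePlus    # illustrative custom resource type
    Properties:
      ServiceToken: !GetAtt CreateCertificateLambda.Arn
      DomainName: example.com
      ValidationDomain: example.com
      HostedZoneId: ZXXXXXXXXXXXXX
      CertificateRegion: us-east-1
      IdempotencyToken: examplecert1

Outputs:
  CertificateArn:
    Value: !GetAtt CreateCertificateCustomResource.certificate_arn</pre>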
<h4>Step 3: Deploy the CloudFormation template</h4>
<p>Finally, the only thing left to do is deploy the CloudFormation template.<br />
Once deployment starts, CloudFormation creates the Lambda function containing the code from Step 1 and launches the custom resource, which creates the certificate and the validation records. Once the status of the certificate becomes &#8216;ISSUED&#8217;, the custom resource finishes successfully and reports the ARN of the certificate back to CloudFormation. This ARN can then be used by other resources in the template.<br />
When you delete the CloudFormation stack, the custom resource cleans up the validation records in the hosted zone and deletes the certificate.</p>
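<p>Deploying from the CLI can be done with a single command; the stack name below is illustrative.<br />
<code>aws cloudformation deploy --template-file cfn.yaml --stack-name acm-plus-example --capabilities CAPABILITY_IAM</code></p>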
<p>CREATE_COMPLETE</p>
<p><img loading="lazy" decoding="async" class="alignnone wp-image-16496 size-full" src="https://cloudar.be/wp-content/uploads/2020/01/auto_validate_acm_icon.png" alt="auto validate acm icon" width="300" height="259" /></p>
<p>The post <a href="https://cloudar.be/awsblog/validate-acm-certificates-in-cloudformation/">Validate ACM certificates in Cloudformation</a> appeared first on <a href="https://cloudar.be">Cloudar</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Security Incident: Be Prepared &#8211; Memory Dumps</title>
		<link>https://cloudar.be/awsblog/security-incident-be-prepared-memory-dumps/</link>
					<comments>https://cloudar.be/awsblog/security-incident-be-prepared-memory-dumps/#respond</comments>
		
		<dc:creator><![CDATA[Koenraad de Boevé]]></dc:creator>
		<pubDate>Sun, 25 Nov 2018 08:43:45 +0000</pubDate>
				<category><![CDATA[AWS Blog]]></category>
		<category><![CDATA[ACL]]></category>
		<category><![CDATA[Amazon Web Services]]></category>
		<category><![CDATA[Forensics]]></category>
		<category><![CDATA[Memory Dump]]></category>
		<category><![CDATA[security]]></category>
		<guid isPermaLink="false">https://cloudar.be/?p=9688</guid>

					<description><![CDATA[<p>Memory Dumps You just finished setting up your super-duper AWS environment.. Highly available &#38; Fault Tolerant: check! Backups in place: check! MFA enforced: check! Security Groups and NACLs: check! CloudTrail enabled: check! You even deserve bonus points for activating Amazon GuardDuty and putting AWS WAF &#38; Shield in front of your CloudFront distribution and loadbalancers. [&#8230;]</p>
<p>The post <a href="https://cloudar.be/awsblog/security-incident-be-prepared-memory-dumps/">Security Incident: Be Prepared &#8211; Memory Dumps</a> appeared first on <a href="https://cloudar.be">Cloudar</a>.</p>
]]></description>
										<content:encoded><![CDATA[<h2>Memory Dumps</h2>
<p>You just finished setting up your super-duper AWS environment&#8230;</p>
<p>Highly available &amp; Fault Tolerant: check!<br />
Backups in place: check!<br />
MFA enforced: check!<br />
Security Groups and NACLs: check!<br />
CloudTrail enabled: check!</p>
<p>You even deserve bonus points for activating Amazon GuardDuty and putting AWS WAF &amp; Shield in front of your CloudFront distribution and load balancers.<br />
Time to lean back with a bit of smugness, while you take a sip of your well-deserved cup of coffee.</p>
<p>Seems like you have covered all your bases, or have you?</p>
<p>One often overlooked topic is security incident response.<br />
While a lot of security incidents can (and should) be mitigated through automation,<br />
some incidents will require manual intervention, such as information and evidence gathering during and after a successful malicious attack on one of your instances.</p>
<p>Effective incident response and forensics require preparation, well ahead of time.<br />
It is critical to have your forensics and remediation tools readily available for whenever the proverbial shit hits the fan.<br />
Documenting your investigative steps and being able to execute them swiftly, will contribute to a well-controlled, thorough and effective investigation.</p>
<p>In this blog post, I&#8217;d like to focus on some of the first steps you might take in your investigation:</p>
<ul>
<li>Building a forensics workstation and taking a memory dump of a compromised instance.</li>
<li>Preparation steps and tools, both for Windows and Linux.</li>
<li>The forensics investigation process.</li>
<li>An investigation of a real memory dump.</li>
</ul>
<p>Although some great tools are already available from <a href="https://www.threatresponse.cloud/" target="_blank" rel="noopener noreferrer">ThreatResponse</a> (such as AWS_IR and Margarita Shotgun), we will build a solution ourselves, in order to gain a thorough understanding of how the process works behind the scenes.<br />
Creating a memory dump is something that should be done immediately, as it provides a snapshot of the memory at the time of the attack.<br />
The dump can then be analyzed and used to build a timeline, improve security after the fact and optionally provide evidence for any follow-up with law enforcement.</p>
<h3>Tools</h3>
<ul>
<li>Linux: LiME (Linux Memory Extractor)</li>
<li>Windows: <a href="https://marketing.accessdata.com/ftkimager4.2.0" target="_blank" rel="noopener noreferrer">FTK Imager</a> or <a href="https://belkasoft.com/get" target="_blank" rel="noopener noreferrer">Belkasoft Live RAM Capturer</a><br />
(both are free, but require registration; you receive the download link by mail)</li>
<li>Volatility</li>
</ul>
<h3>Preparation Steps</h3>
<h4>Create Quarantine and Forensic Security Groups</h4>
<ol>
<li><strong>Forensics Security Group.</strong><br />
This SG will be attached to your Forensics Workstation later on.</p>
<pre class="theme:dark-terminal toolbar-overlay:false nums:false nums-toggle:false expand-toggle:false lang:sh decode:true ">aws ec2 create-security-group --group-name ForensicsSG \
--description "Forensics SG" --vpc-id &lt;your vpc id&gt; \
--profile &lt;your profile&gt;</pre>
<p>This will output something like this:</p>
<pre class="theme:dark-terminal toolbar-overlay:false nums:false nums-toggle:false expand-toggle:false lang:default decode:true ">{
    "GroupId": "sg-22222222222222222"
}</pre>
<p>Now we use this GroupId to add an inbound rule that allows SSH connections to our Forensics Workstation</p>
<pre class="theme:dark-terminal toolbar-overlay:false nums:false nums-toggle:false expand-toggle:false lang:default decode:true">aws ec2 authorize-security-group-ingress --group-id sg-22222222222222222 \
--protocol tcp --port 22 --cidr &lt;your cidr&gt; --profile &lt;your profile&gt;</pre>
<p>By default, all outbound traffic is allowed, and in this case that is OK, so we leave it at that.</p></li>
<li><strong>Quarantine Security Group</strong><br />
This SG will be attached to the compromised instance, to isolate it from any network, except the forensics network.</p>
<pre class="theme:dark-terminal toolbar-overlay:false nums:false nums-toggle:false expand-toggle:false lang:sh decode:true">aws ec2 create-security-group --group-name QuarantineSG \
--description "Quarantine SG" --vpc-id &lt;your vpc id&gt; \
--profile &lt;your profile&gt;
</pre>
<p>Output will look similar to the following:</p>
<pre class="theme:dark-terminal toolbar-overlay:false nums:false nums-toggle:false expand-toggle:false lang:sh decode:true">{
    "GroupId": "sg-111111111111111111"
}</pre>
<p>Remove all outbound (egress) rules</p>
<pre class="theme:dark-terminal toolbar-overlay:false nums:false nums-toggle:false expand-toggle:false lang:sh decode:true">aws ec2 revoke-security-group-egress --group-id sg-11111111111111111 \
--ip-permissions '[{"IpProtocol": "-1","IpRanges": [{"CidrIp": "0.0.0.0/0"}],"Ipv6Ranges": [{"CidrIpv6": "::/0"}]}]' \
--profile &lt;your profile&gt;</pre>
<p>Add rules to allow access from the Forensics Security Group</p>
<pre class="theme:dark-terminal toolbar-overlay:false nums:false nums-toggle:false expand-toggle:false lang:default decode:true ">aws ec2 authorize-security-group-ingress --group-id sg-11111111111111111 \
--ip-permissions '[
    {"IpProtocol":"tcp","FromPort":22,"ToPort":22,"UserIdGroupPairs":[
        {"GroupId": "sg-22222222222222222","Description": "SSH access from the ForensicsSG"}
    ]},
    {"IpProtocol":"tcp","FromPort":4444,"ToPort":4444,"UserIdGroupPairs":[
        {"GroupId":"sg-22222222222222222","Description":"Access from the ForensicsSG for LiME dump over TCP"}
    ]},
    {"IpProtocol":"tcp","FromPort":3389,"ToPort":3389,"UserIdGroupPairs":[
        {"GroupId":"sg-22222222222222222","Description":"RDP access from the ForensicsSG"}
    ]}
    ]' \
--profile &lt;your profile&gt;</pre>
</li>
<li><strong>Create Isolation Functionality</strong><br />
Create a lambda-execution-forensics-trust-policy.json file with the following content:</p>
<pre class="theme:eclipse toolbar-overlay:false nums:false nums-toggle:false lang:default decode:true">{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "lambda.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}</pre>
<p>Create the Lambda Execution role, forensics-lambda-exec-role, using the policy file you just created:</p>
<pre class="theme:dark-terminal toolbar-overlay:false nums:false nums-toggle:false wrap:true lang:sh decode:true">aws iam create-role \
--role-name forensics-lambda-exec-role  \
--assume-role-policy-document file://lambda-execution-forensics-trust-policy.json \
--profile &lt;your profile&gt;</pre>
<p>The output should look like this:</p>
<pre class="theme:dark-terminal toolbar-overlay:false nums:false nums-toggle:false expand-toggle:false lang:default decode:true ">{
    "Role": {
        "AssumeRolePolicyDocument": {
            "Version": "2012-10-17", 
            "Statement": [
                {
                    "Action": "sts:AssumeRole", 
                    "Effect": "Allow", 
                    "Principal": {
                        "Service": "lambda.amazonaws.com"
                    }
                }
            ]
        }, 
        "RoleId": "AROAILWIMFHDIIGTDST7I", 
        "CreateDate": "2018-11-20T20:42:44Z", 
        "RoleName": "forensics-lambda-exec-role", 
        "Path": "/", 
        "Arn": "arn:aws:iam::111111111111:role/forensics-lambda-exec-role"
    }
}</pre>
<p>Create lambda-forensics-policy.json with the following content:</p>
<pre class="theme:eclipse toolbar-overlay:false nums:false expand-toggle:false whitespace-after:1 lang:default decode:true">{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "logs:CreateLogStream",
                "logs:PutLogEvents",
                "logs:CreateLogGroup"
            ],
            "Resource": "arn:aws:logs:*:*:*",
            "Effect": "Allow"
        },
        {
            "Action": [
                "ec2:Describe*",
                "ec2:ModifyNetworkInterfaceAttribute",
                "ec2:ModifyInstanceAttribute"
            ],
            "Resource": "*",
            "Effect": "Allow"
        }
    ]
}</pre>
<p>Create the policy:</p>
<pre class="theme:dark-terminal toolbar-overlay:false nums:false nums-toggle:false wrap-toggle:false expand-toggle:false lang:sh decode:true">aws iam create-policy --policy-name lambda-execute-forensics-policy \
--policy-document file://lambda-forensics-policy.json \
--profile &lt;your profile&gt;</pre>
<p>This will output something like this:</p>
<pre class="theme:dark-terminal toolbar-overlay:false nums:false nums-toggle:false expand-toggle:false lang:default decode:true">{
    "Policy": {
        "PolicyName": "lambda-execute-forensics-policy", 
        "PermissionsBoundaryUsageCount": 0, 
        "CreateDate": "2018-11-20T20:56:04Z", 
        "AttachmentCount": 0, 
        "IsAttachable": true, 
        "PolicyId": "ANPAIG23L72JCRWAQQEFE", 
        "DefaultVersionId": "v1", 
        "Path": "/", 
        "Arn": "arn:aws:iam::111111111111:policy/lambda-execute-forensics-policy", 
        "UpdateDate": "2018-11-20T20:56:04Z"
    }
}</pre>
<p>Attach the policy to the forensics-lambda-exec-role, using the ARN found in the output from the previous command.</p>
<pre class="theme:dark-terminal toolbar-overlay:false nums:false nums-toggle:false expand-toggle:false lang:sh decode:true">aws iam attach-role-policy --role-name forensics-lambda-exec-role \
--policy-arn arn:aws:iam::111111111111:policy/lambda-execute-forensics-policy \
--profile &lt;your profile&gt;</pre>
<p>Create index.py with the following content:</p>
<pre class="theme:eclipse nums:false nums-toggle:false wrap-toggle:false expand-toggle:false lang:python decode:true">import boto3
import os

ec2 = boto3.resource('ec2')
quarantine_sg = os.environ['QUARANTINE_SG']

def set_SecurityGroup(instance):
  oldsecuritygroups = {}
  interfaces = instance.network_interfaces
  for interface in interfaces:
    oldsecuritygroups[interface.id] =  interface.groups
    interface.modify_attribute(Groups = [quarantine_sg])

  return oldsecuritygroups

def lambda_handler(event, context):
  instance = ec2.Instance(event['instance_id'])
  # Replace all Security Groups with the Quarantine Security Group
  orig_groups = set_SecurityGroup(instance)
  return {
    'statusCode': 200,
    'body': "OK",
    'replaced_sgs': orig_groups
  }

</pre>
<p>This function replaces all Security Groups on all attached network interfaces of an instance with the Quarantine Security Group we created earlier. When you discover that an instance is compromised, you can invoke this function to isolate the instance. It fetches the Quarantine Security Group ID from an environment variable, and takes the instance id of the compromised instance as JSON input in the following format:</p>
<pre class="theme:eclipse toolbar-overlay:false nums:false nums-toggle:false expand-toggle:false lang:default decode:true ">{
    "instance_id": "i-xxxxxxxxxxxxxxxxx"
}</pre>
<p>The function outputs the removed Security Groups, so you have a reference for later on.</p>
<p>Zip the file to index.zip</p>
<pre class="theme:dark-terminal toolbar-overlay:false nums:false nums-toggle:false expand-toggle:false lang:sh decode:true ">zip index.zip index.py</pre>
<p>Create the Lambda function, using the role ARN from the output of the role creation (forensics-lambda-exec-role)</p>
<pre class="theme:dark-terminal toolbar-overlay:false nums:false nums-toggle:false wrap-toggle:false expand-toggle:false lang:sh decode:true">aws lambda create-function --function-name forensics-isolate-instance \
--zip-file fileb://index.zip \
--role arn:aws:iam::111111111111:role/forensics-lambda-exec-role \
--handler index.lambda_handler --runtime python3.6 \
--environment Variables={QUARANTINE_SG=sg-11111111111111111} \
--profile &lt;your profile&gt;</pre>
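<p>Once the function exists, an invocation from the AWS CLI (version 1) could look like this; the instance id and profile are placeholders:</p>
<pre class="theme:dark-terminal toolbar-overlay:false nums:false nums-toggle:false expand-toggle:false lang:sh decode:true">aws lambda invoke --function-name forensics-isolate-instance \
--payload '{"instance_id": "i-xxxxxxxxxxxxxxxxx"}' \
response.json --profile &lt;your profile&gt;
cat response.json</pre>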
<p>Done! Now let&#8217;s move on and create a forensics workstation.</p></li>
</ol>
<h4>Build a Forensics Workstation</h4>
<ol>
<li><strong>Create an EC2 Instance Role for the Forensics Workstation<br />
</strong>Create an ec2-forensics-trust-policy.json file with the following content:</p>
<pre class="theme:eclipse toolbar-overlay:false nums:false nums-toggle:false expand-toggle:false lang:default decode:true ">{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}</pre>
<p>Create the EC2 Instance role, ec2-forensics-role, using the Policy file you just created:</p>
<pre class="theme:dark-terminal toolbar-overlay:false nums:false nums-toggle:false expand-toggle:false lang:sh decode:true">aws iam create-role --role-name ec2-forensics-role \
--assume-role-policy-document file://ec2-forensics-trust-policy.json \
--profile &lt;your profile&gt;</pre>
<p>Output:</p>
<pre class="theme:dark-terminal toolbar-overlay:false nums:false nums-toggle:false expand-toggle:false lang:default decode:true">{
    "Role": {
        "AssumeRolePolicyDocument": {
            "Version": "2012-10-17", 
            "Statement": [
                {
                    "Action": "sts:AssumeRole", 
                    "Effect": "Allow", 
                    "Principal": {
                        "Service": "ec2.amazonaws.com"
                    }
                }
            ]
        }, 
        "RoleId": "AROAIGXSH7RCMPCVPY4QA", 
        "CreateDate": "2018-11-21T10:34:07Z", 
        "RoleName": "ec2-forensics-role", 
        "Path": "/", 
        "Arn": "arn:aws:iam::111111111111:role/ec2-forensics-role"
    }
}
</pre>
<p>Create ec2-forensics-policy.json with the following content:</p>
<pre class="theme:eclipse toolbar-overlay:false nums:false nums-toggle:false expand-toggle:false lang:default decode:true">{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowEC2InstanceEC2Forensics",
            "Effect": "Allow",
            "Action": [
                "ec2:Describe*",
                "ec2:AuthorizeSecurityGroupIngress",
                "ec2:ModifyVolumeAttribute",
                "ec2:CreateKeyPair",
                "ec2:ReportInstanceStatus",
                "ec2:ModifySnapshotAttribute",
                "ec2:RevokeSecurityGroupEgress",
                "ec2:ImportKeyPair",
                "ec2:CreateTags",
                "ec2:StopInstances",
                "ec2:RevokeSecurityGroupIngress",
                "ec2:AttachVolume",
                "ec2:ImportVolume",
                "ec2:ModifySubnetAttribute",
                "ec2:CreateSnapshot",
                "ec2:RebootInstances",
                "ec2:ImportInstance",
                "ec2:ResetSnapshotAttribute",
                "ec2:ImportSnapshot",
                "ec2:CopySnapshot",
                "ec2:CreateImage",
                "ec2:CopyImage",
                "ec2:GetLaunchTemplateData",
                "ec2:ImportImage",
                "ec2:DetachVolume",
                "ec2:CreateFlowLogs",
                "ec2:GetConsoleOutput",
                "ec2:CreateSecurityGroup",
                "ec2:CreateNetworkAcl",
                "ec2:ModifyInstanceAttribute",
                "ec2:AuthorizeSecurityGroupEgress",
                "ec2:DetachNetworkInterface",
                "ec2:CreateNetworkAclEntry"
            ],
            "Resource": "*"
        },
        {
            "Sid": "AllowEC2InstanceInvokeLambda",
            "Effect": "Allow",
            "Action": "lambda:InvokeFunction",
            "Resource": "arn:aws:lambda:eu-west-1:111111111111:function:forensics-isolate-instance"
        }
    ]
}</pre>
<p>Create the ec2-forensics-policy IAM Policy:</p>
<pre class="theme:dark-terminal toolbar-overlay:false nums:false nums-toggle:false expand-toggle:false lang:sh decode:true">aws iam create-policy --policy-name ec2-forensics-policy \
--policy-document file://ec2-forensics-policy.json \
--profile &lt;your profile&gt;</pre>
<p>Attach the policy to the ec2-forensics-role, using the ARN found in the output from the previous command.<br />
The Forensics Workstation also needs access to S3 and AWS Systems Manager, so we also include some AWS Managed Policies</p>
<pre class="theme:dark-terminal toolbar-overlay:false nums:false nums-toggle:false expand-toggle:false lang:sh decode:true">aws iam attach-role-policy --role-name ec2-forensics-role \
--policy-arn arn:aws:iam::aws:policy/service-role/AmazonEC2RoleforSSM \
--profile &lt;your profile&gt;
aws iam attach-role-policy --role-name ec2-forensics-role \
--policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess \
--profile &lt;your profile&gt;
aws iam attach-role-policy --role-name ec2-forensics-role \
--policy-arn arn:aws:iam::111111111111:policy/ec2-forensics-policy \
--profile &lt;your profile&gt;</pre>
<p>Create an Instance Profile, called ec2-forensics-profile</p>
<pre class="theme:dark-terminal toolbar-overlay:false nums:false nums-toggle:false expand-toggle:false lang:sh decode:true">aws iam create-instance-profile \
--instance-profile-name ec2-forensics-profile \
--profile &lt;your profile&gt;</pre>
<p>Attach the ec2-forensics-role to the ec2-forensics-profile</p>
<pre class="theme:dark-terminal toolbar-overlay:false nums:false nums-toggle:false expand-toggle:false lang:sh decode:true ">aws iam add-role-to-instance-profile \
--instance-profile-name ec2-forensics-profile \
--role-name ec2-forensics-role \
--profile &lt;your profile&gt;</pre>
</li>
<li><strong>Provision the Forensics Workstation</strong><br />
Create a user-data.txt script with the following content:</p>
<pre class="theme:eclipse toolbar-overlay:false nums:false nums-toggle:false expand-toggle:false lang:sh decode:true ">#!/bin/bash

# Install prerequisites
sudo yum -y update
sudo yum -y install python-pip pcre-tools gcc autoconf automake libtool nc git kernel-devel libdwarf-tools
pip install distorm3 pycrypto pillow openpyxl ujson pytz IPython

# Install Volatility
cd /home/ec2-user
wget http://downloads.volatilityfoundation.org/releases/2.6/volatility-2.6.zip
unzip volatility-2.6.zip
mv volatility-master volatility
chown -R ec2-user.ec2-user volatility

# Install LiME
git clone https://github.com/504ensicsLabs/LiME.git
chown -R ec2-user.ec2-user LiME</pre>
<p>Create the EC2 Instance</p>
<pre class="theme:dark-terminal toolbar-overlay:false nums:false nums-toggle:false expand-toggle:false lang:sh decode:true">aws ec2 run-instances --image-id ami-09693313102a30b2c \
--count 1 --instance-type t3.micro --key-name MyKeyPair \
--security-group-ids sg-22222222222222222 \
--subnet-id subnet-33333333333333333 \
--user-data file://user-data.txt \
--tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value="My Forensics WS"}]' \
--iam-instance-profile Name=ec2-forensics-profile \
--profile &lt;your profile&gt;</pre>
<p>Tip: You can fetch the latest Amazon Linux AMI by querying the SSM Parameter Store:</p>
<pre class="theme:dark-terminal toolbar-overlay:false nums:false nums-toggle:false expand-toggle:false lang:sh decode:true ">aws ssm get-parameters \
--names /aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2 \
--profile &lt;your profile&gt;
{
    "InvalidParameters": [], 
    "Parameters": [
        {
            "Name": "/aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2", 
            "LastModifiedDate": 1542668357.322, 
            "Value": "ami-09693313102a30b2c", 
            "Version": 11, 
            "Type": "String", 
            "ARN": "arn:aws:ssm:eu-west-1::parameter/aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2"
        }
    ]
}
</pre>
<p>We now have a basic workstation.</li>
<li><strong>Create a forensic volume</strong><br />
We will need a volume that we can attach to and detach from any instance, to temporarily store memory dumps and to provide tools to the compromised instance.<br />
Since we need to support both Windows and Linux, we create an exFAT filesystem on it, as exFAT is supported on all platforms.</p>
<pre class="theme:dark-terminal toolbar-overlay:false nums:false nums-toggle:false expand-toggle:false lang:sh decode:true">aws ec2 create-volume --availability-zone eu-west-1a \
--size 100 --volume-type gp2 --tag-specifications \
'ResourceType=volume,Tags=[{Key=Name,Value=exFat-Forensics-Volume}]' \
--profile &lt;your profile&gt;</pre>
<p>This creates a 100 GB volume. You can lower the size, but make sure that it is comfortably bigger than the largest RAM size on any of your instances.</p>
<p>Connect to the Forensics Workstation and attach the new volume (you probably need to run aws configure to set your default region first)</p>
<pre class="theme:dark-terminal toolbar-overlay:false nums:false nums-toggle:false expand-toggle:false lang:sh decode:true ">aws ec2 attach-volume --device /dev/sdh \
--volume-id &lt;volumeid&gt; \
--instance-id $(curl http://169.254.169.254/latest/meta-data/instance-id)</pre>
<p>Partition and format the disk</p>
<pre class="theme:dark-terminal toolbar-overlay:false nums:false nums-toggle:false expand-toggle:false lang:sh decode:true" title="Partitioning">sudo fdisk /dev/sdh

Welcome to fdisk (util-linux 2.30.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table.
Created a new DOS disklabel with disk identifier 0x601015b4.

Command (m for help): p
Disk /dev/sdh: 100 GiB, 107374182400 bytes, 209715200 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x601015b4

Command (m for help): n
Partition type
   p   primary (0 primary, 0 extended, 4 free)
   e   extended (container for logical partitions)
Select (default p): p
Partition number (1-4, default 1): 
First sector (2048-209715199, default 2048): 
Last sector, +sectors or +size{K,M,G,T,P} (2048-209715199, default 209715199): 

Created a new partition 1 of type 'Linux' and of size 100 GiB.

Command (m for help): t
Selected partition 1          
Hex code (type L to list all codes): 7
Changed type of partition 'Linux' to 'HPFS/NTFS/exFAT'.

Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.

</pre>
<pre class="theme:dark-terminal nums:false nums-toggle:false expand-toggle:false lang:sh decode:true " title="Formatting">sudo mkfs.exfat /dev/sdh1</pre>
<p>Note that mkfs.exfat is provided by the exfat-utils package on most distributions. Next, mount the volume</p>
<pre class="theme:dark-terminal toolbar-overlay:false nums:false nums-toggle:false expand-toggle:false lang:sh decode:true">sudo mount /dev/sdh1 /mnt</pre>
<p>Copy your local LiME and Volatility folders to the volume, so that you can compile a kernel module or create a Volatility profile on the fly, just in case.<br />
For Windows, copy RAM Capturer and/or FTK Imager to the volume as well.</p>
<p>Once you have all tools on the volume, unmount it, detach it and make a snapshot.</p>
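<p>Those last steps could look like this from the Forensics Workstation; the volume id is a placeholder:</p>
<pre class="theme:dark-terminal toolbar-overlay:false nums:false nums-toggle:false expand-toggle:false lang:sh decode:true">sudo umount /mnt
aws ec2 detach-volume --volume-id &lt;volumeid&gt;
aws ec2 create-snapshot --volume-id &lt;volumeid&gt; \
--description "Forensics tools volume"</pre>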
<p>Next, we will gather LiME kernel modules and create Volatility profiles for the instances running in our environment.<br />
The LiME kernel module will need to be loaded on the compromised instance.</li>
</ol>
<h4>Create LiME kernel modules and Volatility Profiles</h4>
<p>Note that you need to compile the kernel module for the EXACT kernel version running on your instances. If you don&#8217;t patch your systems and/or have a lot of different Linux flavors, you will have a hard time maintaining the LiME kernel modules and Volatility profiles.</p>
<p>You have two options for obtaining a LiME kernel module matching your kernel:</p>
<ol>
<li><strong>ThreatResponse LiME module repository<br />
</strong><a href="https://threatresponse-lime-modules.s3.amazonaws.com/" target="_blank" rel="noopener noreferrer">https://threatresponse-lime-modules.s3.amazonaws.com/</a><br />
This URL returns an XML file listing the available LiME modules.<br />
Here is an excerpt of that XML file:</p>
<pre class="theme:eclipse toolbar-overlay:false nums-toggle:false expand-toggle:false lang:default decode:true">&lt;ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"&gt;
&lt;Name&gt;threatresponse-lime-modules&lt;/Name&gt;
&lt;Prefix/&gt;
&lt;Marker/&gt;
&lt;-- SNIP --&gt;
&lt;Contents&gt;
&lt;Key&gt;
modules/lime-2.6.32-131.0.15.el6.centos.plus.x86_64.ko
&lt;/Key&gt;
&lt;LastModified&gt;2018-09-06T18:40:37.000Z&lt;/LastModified&gt;
&lt;ETag&gt;"93efeac2519a4c6a573601d203416098"&lt;/ETag&gt;
&lt;Size&gt;1098692&lt;/Size&gt;
&lt;StorageClass&gt;STANDARD&lt;/StorageClass&gt;
&lt;/Contents&gt;
&lt;Contents&gt;
&lt;Key&gt;
modules/lime-2.6.32-131.0.15.el6.centos.plus.x86_64.ko.sig
&lt;/Key&gt;
&lt;LastModified&gt;2018-09-06T18:41:34.000Z&lt;/LastModified&gt;
&lt;ETag&gt;"c4d44af3b2265e55e8c23b8dc62d8828"&lt;/ETag&gt;
&lt;Size&gt;566&lt;/Size&gt;
&lt;StorageClass&gt;STANDARD&lt;/StorageClass&gt;
&lt;/Contents&gt;
..</pre>
<p>The reference to a kernel module (the &lt;Key&gt; element in the excerpt above) is the S3 object key of that module.<br />
So if your kernel is 2.6.32-131.0.15.el6, you can download it from https://threatresponse-lime-modules.s3.amazonaws.com/modules/lime-2.6.32-131.0.15.el6.centos.plus.x86_64.ko</li>
<li><strong>Build your own repository</strong><br />
If you cannot find a LiME module for the kernels running in your environment, you can build your own.<br />
Install Volatility and LiME on either an existing instance, or launch a new instance of the Linux distribution for which you want to create a module and Volatility profile. For Red Hat/CentOS/Amazon Linux, you can use the user data from the Forensics Workstation (2. Provision the Forensics Workstation).<br />
For Debian/Ubuntu you can use this user-data content:</p>
<pre class="theme:eclipse toolbar-overlay:false nums:false nums-toggle:false expand-toggle:false lang:sh decode:true">#!/bin/bash
sudo apt -y update
sudo apt -y install libelf-dev libdwarf-dev dwarfdump zip make gcc
cd /home/ubuntu
# Install Volatility
wget http://downloads.volatilityfoundation.org/releases/2.6/volatility-2.6.zip
unzip volatility-2.6.zip
mv volatility-master volatility
chown -R ubuntu:ubuntu volatility
# Install LiME
git clone https://github.com/504ensicsLabs/LiME.git
chown -R ubuntu:ubuntu LiME</pre>
<ol>
<li>Compile the LiME module<br />
First, make sure you have the kernel headers, source and image installed on this instance.</p>
<pre class="theme:dark-terminal toolbar-overlay:false nums:false nums-toggle:false expand-toggle:false lang:sh decode:true ">cd ~/LiME/src
make</pre>
<p>This should have created a file called lime-$(uname -r).ko.<br />
If you need to create a module for a different kernel version (for example, for an older unpatched instance),<br />
install the required version and change the above command to:</p>
<pre class="theme:dark-terminal toolbar-overlay:false nums:false nums-toggle:false expand-toggle:false lang:sh decode:true ">make KVER=&lt;kernel version&gt;</pre>
<p>Note that this does not require the targeted kernel version to be active.</p></li>
<li>Create Volatility Profiles
<pre class="theme:dark-terminal toolbar-overlay:false nums:false nums-toggle:false expand-toggle:false lang:sh decode:true">cd ~/volatility/tools/linux

# change os to reflect your distro
os="ubuntu"
# change kernel_version to the kernel version you want to compile for;
# make sure this kernel is installed (including headers, source and boot image),
# but it does not need to be the actively running kernel
kernel_version=$(uname -r)
make KVER=${kernel_version}
zip ~/${os}-${kernel_version}.zip module.dwarf /boot/System.map-${kernel_version}</pre>
</li>
<li>Copy the LiME module (.ko file) and Volatility profile (.zip file) to an S3 bucket that acts as a central repository.</li>
</ol>
</li>
</ol>
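<p>As a concrete sketch of the prebuilt-module lookup described above, the object key from the bucket listing can be turned into a download URL. The bucket and kernel version are the examples used in this post; on a real target you would take the kernel from uname -r.</p>

```shell
# Assemble the download URL for a prebuilt LiME module.
# Bucket and kernel version are the examples from this post.
bucket="threatresponse-lime-modules"
kernel="2.6.32-131.0.15.el6.centos.plus.x86_64"   # on the target: kernel=$(uname -r)
url="https://${bucket}.s3.amazonaws.com/modules/lime-${kernel}.ko"
echo "${url}"
# To actually fetch it:
# curl -fsSL -o "lime-${kernel}.ko" "${url}"
```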
<h4>Incident Workflow</h4>
<ol>
<li>Log on to the Forensics Workstation<br />
SSH into your Forensics Workstation in two terminal windows (use ssh-agent, ssh-add and ssh -A).</li>
<li>Isolate the compromised instance
<pre class="theme:dark-terminal toolbar-overlay:false expand-toggle:false lang:sh decode:true">aws lambda invoke --function-name forensics-isolate-instance \
--payload '{"instance_id": "&lt;compromised instance id&gt;"}' /tmp/output.txt
</pre>
</li>
<li>Create a snapshot of the compromised instance
<pre class="theme:dark-terminal toolbar-overlay:false nums:false nums-toggle:false expand-toggle:false lang:sh decode:true ">aws ec2 create-snapshot \
--volume-id &lt;volume-id of compromised instances root volume&gt; \
--tag-specifications \
'ResourceType=snapshot,Tags=[{Key=Name,Value=compromised-instance-snap}]'</pre>
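<p>The snapshot is taken asynchronously; before relying on it, you can wait for it to complete. A sketch that assembles the wait command (the snapshot id below is a placeholder; use the id returned by create-snapshot above):</p>

```shell
# Wait for the snapshot to reach the 'completed' state.
# The snapshot id is a placeholder.
snapshot_id="snap-0123456789abcdef0"
cmd="aws ec2 wait snapshot-completed --snapshot-ids ${snapshot_id}"
echo "${cmd}"   # run the printed command on the Forensics Workstation
```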
</li>
<li>Attach the Forensics Volume to the compromised EC2 instance and mount it.<br />
On the Forensics Workstation:</p>
<pre class="theme:dark-terminal toolbar-overlay:false nums:false nums-toggle:false expand-toggle:false lang:sh decode:true">aws ec2 attach-volume --device /dev/sdX \
--volume-id &lt;forensics volume id&gt; \
--instance-id &lt;compromised instance id&gt;</pre>
<p>On the compromised EC2 Instance:</p>
<p><strong>Linux:</strong><br />
SSH into the compromised instance, fetch the kernel version and mount the volume.<br />
We need the kernel version for the next steps.</p>
<pre class="theme:dark-terminal toolbar-overlay:false nums:false nums-toggle:false expand-toggle:false lang:sh decode:true">uname -r
sudo mount /dev/sdX1 /mnt</pre>
<p><strong>Windows:</strong><br />
RDP into the compromised instance (you need to set up an SSH tunnel via the Forensics Workstation).<br />
Mount the volume via Computer Management -&gt; Disk Management.<br />
<img loading="lazy" decoding="async" class="wp-image-9760 alignnone" src="https://cloudar.be/wp-content/uploads/2018/11/WindowsDiskManagement.png" alt="" width="375" height="269" srcset="https://cloudar.be/wp-content/uploads/2018/11/WindowsDiskManagement.png 1976w, https://cloudar.be/wp-content/uploads/2018/11/WindowsDiskManagement-768x550.png 768w, https://cloudar.be/wp-content/uploads/2018/11/WindowsDiskManagement-1536x1101.png 1536w, https://cloudar.be/wp-content/uploads/2018/11/WindowsDiskManagement-1005x720.png 1005w" sizes="auto, (max-width: 375px) 100vw, 375px" /><br />
The new volume is attached and identified as Disk 1, but it is in an offline state.<br />
Right-click on Disk 1 and select &#8216;Online&#8217;.<br />
<img loading="lazy" decoding="async" class="alignnone wp-image-9761" src="https://cloudar.be/wp-content/uploads/2018/11/WindowsDiskOnline.png" alt="" width="424" height="128" srcset="https://cloudar.be/wp-content/uploads/2018/11/WindowsDiskOnline.png 1152w, https://cloudar.be/wp-content/uploads/2018/11/WindowsDiskOnline-768x232.png 768w" sizes="auto, (max-width: 424px) 100vw, 424px" /><br />
The Volume is now online and mapped to D:<br />
<img loading="lazy" decoding="async" class="alignnone wp-image-9762" src="https://cloudar.be/wp-content/uploads/2018/11/WindowsDiskOnlineResult.png" alt="" width="405" height="314" srcset="https://cloudar.be/wp-content/uploads/2018/11/WindowsDiskOnlineResult.png 1820w, https://cloudar.be/wp-content/uploads/2018/11/WindowsDiskOnlineResult-768x597.png 768w, https://cloudar.be/wp-content/uploads/2018/11/WindowsDiskOnlineResult-1536x1193.png 1536w, https://cloudar.be/wp-content/uploads/2018/11/WindowsDiskOnlineResult-927x720.png 927w" sizes="auto, (max-width: 405px) 100vw, 405px" /></p>
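<p>The SSH tunnel for RDP mentioned above runs through the Forensics Workstation. A sketch that assembles the command (both addresses are placeholders):</p>

```shell
# Forward local port 3389 through the Forensics Workstation to the
# compromised instance's RDP port. Addresses are placeholders.
forensics_ws="ec2-user@203.0.113.10"
compromised_ip="10.0.0.12"
cmd="ssh -A -L 3389:${compromised_ip}:3389 ${forensics_ws}"
echo "${cmd}"   # after running this, point your RDP client at localhost:3389
```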
<p>On Windows, you can now skip to Step 7.</p></li>
<li>In the other terminal (Forensics WS), fetch the LiME kernel module from your S3 bucket or the ThreatResponse repository</li>
<li>scp the LiME module to the compromised instance</li>
<li>Run the memory dump on the compromised instance<br />
<strong>On Linux:</strong></p>
<pre class="theme:dark-terminal toolbar-overlay:false nums:false nums-toggle:false expand-toggle:false lang:sh decode:true">sudo insmod /path/to/lime-$(uname -r).ko "path=/mnt/ram.lime format=lime digest=sha1"</pre>
<p>This will create the memory dump file ram.lime and the digest file ram.sha1 on the forensics volume.</p>
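<p>Before moving the dump around, it is worth verifying it against the digest file. A minimal sketch, run here on a stand-in file so it is self-contained; on the forensics volume the paths would be /mnt/ram.lime and /mnt/ram.sha1, and the digest file is assumed to contain the bare hex hash:</p>

```shell
# Verify a dump against its .sha1 digest file (stand-in paths under /tmp;
# replace with /mnt/ram.lime and /mnt/ram.sha1 on the forensics volume).
dump=/tmp/ram.lime
printf 'dummy dump contents' > "${dump}"            # stand-in for the real dump
sha1sum "${dump}" | awk '{print $1}' > /tmp/ram.sha1
computed=$(sha1sum "${dump}" | awk '{print $1}')
stored=$(cat /tmp/ram.sha1)
if [ "${computed}" = "${stored}" ]; then result="digest OK"; else result="digest MISMATCH"; fi
echo "${result}"
```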
<p><strong>On Windows:</strong><br />
Open File Explorer and go to D:\<br />
If RamCapturer is not yet unzipped, unzip RamCapturer.zip first.<br />
Then run D:\RamCapturer\x64\RamCapturer.exe as Administrator.<br />
<img loading="lazy" decoding="async" class="alignnone wp-image-9763" src="https://cloudar.be/wp-content/uploads/2018/11/runRamCapturerAsAdmin.png" alt="" width="353" height="340" srcset="https://cloudar.be/wp-content/uploads/2018/11/runRamCapturerAsAdmin.png 1250w, https://cloudar.be/wp-content/uploads/2018/11/runRamCapturerAsAdmin-768x740.png 768w, https://cloudar.be/wp-content/uploads/2018/11/runRamCapturerAsAdmin-748x720.png 748w" sizes="auto, (max-width: 353px) 100vw, 353px" /><br />
<img loading="lazy" decoding="async" class="alignnone wp-image-9764" src="https://cloudar.be/wp-content/uploads/2018/11/RamCapturerStart.png" alt="" width="360" height="187" srcset="https://cloudar.be/wp-content/uploads/2018/11/RamCapturerStart.png 1082w, https://cloudar.be/wp-content/uploads/2018/11/RamCapturerStart-768x399.png 768w" sizes="auto, (max-width: 360px) 100vw, 360px" /><br />
Save the dump to D:\ and run &#8216;Capture!&#8217;<br />
The dump will be saved as YYYYMMDD.mem where YYYYMMDD is the current date.</li>
<li>Fetch the memory dump onto the Forensics Workstation<br />
Unmount the Forensics Volume on Linux, or, on Windows, put it offline again using Disk Management.<br />
Detach the volume from the compromised EC2 instance and attach it back to the Forensics Workstation.</li>
<li>Stop the Compromised instance.</li>
</ol>
<h4>Sample Results from Memory Dumps</h4>
<p>Volatility requires a profile matching your kernel. For Windows these profiles are already included, but for Linux you might need to import the Volatility profile into your Volatility setup.<br />
Let&#8217;s first test if the profile for your kernel is already configured:</p>
<pre class="theme:dark-terminal toolbar-overlay:false nums:false nums-toggle:false expand-toggle:false lang:sh decode:true ">cd volatility
python vol.py --info | grep Profile
Volatility Foundation Volatility Framework 2.6
Profiles
Linuxamzn-4_14_72-73_55_amzn2_x86_64x64 - A Profile for Linux amzn-4.14.72-73.55.amzn2.x86_64 x64
Linuxamzn-4_14_77-81_59_amzn2_x86_64x64 - A Profile for Linux amzn-4.14.77-81.59.amzn2.x86_64 x64
VistaSP0x64                             - A Profile for Windows Vista SP0 x64
VistaSP0x86                             - A Profile for Windows Vista SP0 x86
VistaSP1x64                             - A Profile for Windows Vista SP1 x64
VistaSP1x86                             - A Profile for Windows Vista SP1 x86
VistaSP2x64                             - A Profile for Windows Vista SP2 x64
VistaSP2x86                             - A Profile for Windows Vista SP2 x86
Win10x64                                - A Profile for Windows 10 x64
Win10x64_10240_17770                    - A Profile for Windows 10 x64 (10.0.10240.17770 / 2018-02-10)

...</pre>
<p>If your kernel is not listed, you can add it by copying the Volatility profile (created in section 2.2 Create Volatility Profile)<br />
to the <span class="s1">volatility/plugins/overlays/linux/ directory.<br />
Rerunning the above command should show your added profile.</span></p>
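<p>Importing a profile is a simple copy. A self-contained sketch using stand-in paths under /tmp; in a real setup the destination is the volatility/plugins/overlays/linux/ directory inside your Volatility tree, and the zip is the one built in &#8220;Create Volatility Profiles&#8221; (the file name below is an example):</p>

```shell
# Copy a profile zip into Volatility's Linux overlays directory.
# Stand-in paths keep this runnable anywhere; adjust to your tree.
profile_zip=/tmp/ubuntu-4.15.0-1023-aws.zip         # example profile name
touch "${profile_zip}"                              # stand-in for the real zip
overlay_dir=/tmp/volatility/volatility/plugins/overlays/linux
mkdir -p "${overlay_dir}"
cp "${profile_zip}" "${overlay_dir}/"
ls "${overlay_dir}"
```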
<p>OK, Pièce de résistance: some results:</p>
<p>Fetching the lsof output from the Linux memory dump:</p>
<pre class="theme:dark-terminal toolbar-overlay:false nums:false nums-toggle:false expand-toggle:false lang:sh decode:true ">python vol.py -f /mnt/ram-20181122.lime \
--profile Linuxamzn-4_14_72-73_55_amzn2_x86_64x64 linux_lsof

Volatility Foundation Volatility Framework 2.6
Offset             Name                           Pid      FD       Path
------------------ ------------------------------ -------- -------- ----
0xffff88001c628000 systemd                               1        0 /dev/null
0xffff88001c628000 systemd                               1        1 /dev/null
0xffff88001c628000 systemd                               1        2 /dev/null
0xffff88001c628000 systemd                               1        3 anon_inode:[6744]
0xffff88001c628000 systemd                               1        4 anon_inode:[6744]
0xffff88001c628000 systemd                               1        5 anon_inode:[6744]
0xffff88001c628000 systemd                               1        6 /sys/fs/cgroup/systemd
0xffff88001c628000 systemd                               1        7 anon_inode:[6744]
0xffff88001c628000 systemd                               1        8 socket:[14109]
0xffff88001c628000 systemd                               1        9 /proc/1/mountinfo

-- SNIP --

0xffff8800182525c0 sudo                              23415        3 pipe:[1758814]
0xffff8800182525c0 sudo                              23415        4 pipe:[1758814]
0xffff8800182525c0 sudo                              23415        5 socket:[1758834]
0xffff8800182525c0 sudo                              23415        6 socket:[1758839]
0xffff880018254b80 insmod                            23416        0 /dev/pts/0
0xffff880018254b80 insmod                            23416        1 /dev/pts/0
0xffff880018254b80 insmod                            23416        2 /dev/pts/0
0xffff880018254b80 insmod                            23416        3 /mnt/lime-modules/amazon/lime-4.14.72-73.55.amzn2.x86_64.ko</pre>
<p>List all established connections on Linux</p>
<pre class="theme:dark-terminal toolbar-overlay:false nums:false nums-toggle:false expand-toggle:false lang:sh decode:true ">python vol.py -f /mnt/ram-20181122.lime --profile Linuxamzn-4_14_72-73_55_amzn2_x86_64x64 linux_netstat | grep EST
Volatility Foundation Volatility Framework 2.6
TCP      10.100.4.6      :   22 10.100.4.111    :44854 ESTABLISHED                  sshd/23161
TCP      10.100.4.6      :   22 10.100.4.111    :44854 ESTABLISHED                  sshd/23179
</pre>
<p>List all open ports on Linux</p>
<pre class="theme:dark-terminal toolbar-overlay:false nums:false nums-toggle:false expand-toggle:false lang:sh decode:true ">python vol.py -f /mnt/ram-20181122.lime --profile Linuxamzn-4_14_72-73_55_amzn2_x86_64x64 linux_netstat | grep LISTEN
Volatility Foundation Volatility Framework 2.6
TCP      0.0.0.0         :  111 0.0.0.0         :    0 LISTEN                    rpcbind/2662 
TCP      ::              :  111 ::              :    0 LISTEN                    rpcbind/2662 
TCP      127.0.0.1       :   25 0.0.0.0         :    0 LISTEN                     master/3143 
TCP      0.0.0.0         :   22 0.0.0.0         :    0 LISTEN                       sshd/3273 
TCP      ::              :   22 ::              :    0 LISTEN                       sshd/3273</pre>
<p>List all processes on Linux</p>
<pre class="theme:dark-terminal toolbar-overlay:false nums:false nums-toggle:false expand-toggle:false lang:default decode:true ">python vol.py -f /mnt/ram-20181122.lime --profile Linuxamzn-4_14_72-73_55_amzn2_x86_64x64 linux_pslist
Volatility Foundation Volatility Framework 2.6
Offset             Name                 Pid             PPid            Uid             Gid    DTB                Start Time
------------------ -------------------- --------------- --------------- --------------- ------ ------------------ ----------
0xffff88001c628000 systemd              1               0               0               0      0x000000001b014000 2018-11-15 16:22:54 UTC+0000
0xffff88001c62a5c0 kthreadd             2               0               0               0      ------------------ 2018-11-15 16:22:54 UTC+0000
0xffff88001c648000 kworker/0:0H         4               2               0               0      ------------------ 2018-11-15 16:22:54 UTC+0000
0xffff88001c64cb80 mm_percpu_wq         6               2               0               0      ------------------ 2018-11-15 16:22:54 UTC+0000
0xffff88001c690000 ksoftirqd/0          7               2               0               0      ------------------ 2018-11-15 16:22:54 UTC+0000
0xffff88001c6925c0 rcu_sched            8               2               0               0      ------------------ 2018-11-15 16:22:54 UTC+0000
0xffff88001c694b80 rcu_bh               9               2               0               0      ------------------ 2018-11-15 16:22:54 UTC+0000
0xffff88001c698000 migration/0          10              2               0               0      ------------------ 2018-11-15 16:22:54 UTC+0000
0xffff88001c69a5c0 watchdog/0           11              2               0               0      ------------------ 2018-11-15 16:22:54 UTC+0000

-- SNIP --

0xffff88000890cb80 sshd                 23161           3273            0               0      0x000000000a35c000 2018-11-22 12:56:43 UTC+0000
0xffff880019598000 sshd                 23179           23161           1000            1000   0x000000000a246000 2018-11-22 12:56:43 UTC+0000
0xffff88001679a5c0 bash                 23180           23179           1000            1000   0x0000000008990000 2018-11-22 12:56:43 UTC+0000
0xffff880017b30000 kworker/u30:0        23287           2               0               0      ------------------ 2018-11-22 12:57:16 UTC+0000
0xffff8800182525c0 sudo                 23415           23180           0               0      0x000000000a308000 2018-11-22 12:59:15 UTC+0000
0xffff880018254b80 insmod               23416           23415           0               0      0x00000000005a6000 2018-11-22 12:59:15 UTC+0000</pre>
<p>There are other interesting possibilities. To get an idea of what you can query, list the available Linux plugins:</p>
<pre class="theme:dark-terminal toolbar-overlay:false nums:false nums-toggle:false expand-toggle:false lang:sh decode:true">python vol.py --info | grep linux_</pre>
<p>This will list all commands available for Linux memory dumps.<br />
Note that some commands might not work, because they are not supported for a specific profile.</p>
<p>Process list on Windows</p>
<pre class="theme:dark-terminal toolbar-overlay:false nums:false nums-toggle:false expand-toggle:false lang:sh decode:true">volatility]$ python vol.py -f /mnt/20181122.mem  --profile Win2016x64_14393 pslist
Volatility Foundation Volatility Framework 2.6
Offset(V)          Name                    PID   PPID   Thds     Hnds   Sess  Wow64 Start                          Exit                          
------------------ -------------------- ------ ------ ------ -------- ------ ------ ------------------------------ ------------------------------
0xffffbf0b9de5e500 System                    4      0    144        0 ------      0 2018-11-18 23:32:48 UTC+0000                                 
0xffffbf0b9e6ec040 smss.exe                388      4      2        0 ------      0 2018-11-18 23:32:49 UTC+0000                                 
0xffffbf0b9e6a2080 csrss.exe               524    516      9        0      0      0 2018-11-18 23:33:00 UTC+0000                                 
0xffffbf0b9e98f080 smss.exe                584    388      0 --------      1      0 2018-11-18 23:33:01 UTC+0000                                 
0xffffbf0b9e9a7080 csrss.exe               592    584      9        0      1      0 2018-11-18 23:33:01 UTC+0000                                 
0xffffbf0b9e9f3080 wininit.exe             608    516      1        0      0      0 2018-11-18 23:33:01 UTC+0000                                 
0xffffbf0b9e9c1080 winlogon.exe            644    584      2        0      1      0 2018-11-18 23:33:01 UTC+0000                                 
0xffffbf0b9ec7b080 services.exe            704    608      4        0      0      0 2018-11-18 23:33:01 UTC+0000                                 
0xffffbf0b9ec81080 lsass.exe               712    608      7        0      0      0 2018-11-18 23:33:02 UTC+0000                                 
0xffffbf0b9ecc4380 svchost.exe             784    704     16        0      0      0 2018-11-18 23:33:03 UTC+0000                                 
0xffffbf0b9eceb840 svchost.exe             836    704     11        0      0      0 2018-11-18 23:33:03 UTC+0000

-- SNIP --

0xffffbf0b9f44e840 userinit.exe           3120   2204      0 --------      2      0 2018-11-21 22:12:45 UTC+0000                                 
0xffffbf0b9f4d5840 explorer.exe           3136   3120     60        0      2      0 2018-11-21 22:12:45 UTC+0000                                 
0xffffbf0b9e252840 TabTip.exe             3148    996     12        0      2      0 2018-11-21 22:12:45 UTC+0000                                 
0xffffbf0b9f249840 TabTip32.exe           3212   3148      1        0      2      1 2018-11-21 22:12:46 UTC+0000                                 
0xffffbf0b9f5fd840 ShellExperienc         3988    784     20        0      2      0 2018-11-21 22:12:56 UTC+0000                                 
0xffffbf0b9f44c340 SearchUI.exe           4084    784     16        0      2      0 2018-11-21 22:12:59 UTC+0000                                 
0xffffbf0b9f7795c0 MpCmdRun.exe           4568   4528      5        0      0      0 2018-11-21 22:13:08 UTC+0000                                 
0xffffbf0b9f695080 WUDFHost.exe           2376    996      6        0      0      0 2018-11-22 12:05:39 UTC+0000                                 
0xffffbf0b9f8e4840 conhost.exe            2968   3668      0 --------      2      0 2018-11-22 12:25:08 UTC+0000                                 
0xffffbf0b9f29b840 RamCapture64.e         4004   3136      9        0      2      0 2018-11-22 12:25:46 UTC+0000                                 
0xffffbf0b9e90a840 conhost.exe            3828   4004      9        0      2      0 2018-11-22 12:25:46 UTC+0000</pre>
<p>Open ports on Windows</p>
<pre class="theme:dark-terminal toolbar-overlay:false nums:false nums-toggle:false expand-toggle:false lang:sh decode:true ">python vol.py -f /mnt/20181122.mem  --profile Win2016x64_14393 netscan | grep LISTEN
Volatility Foundation Volatility Framework 2.6
0xbf0b9ded6560     TCPv4    0.0.0.0:3389                   0.0.0.0:0            LISTENING        960      svchost.exe    2018-11-18 23:33:04 UTC+0000
0xbf0b9ded6970     TCPv4    0.0.0.0:3389                   0.0.0.0:0            LISTENING        960      svchost.exe    2018-11-18 23:33:04 UTC+0000
0xbf0b9ded6970     TCPv6    :::3389                        :::0                 LISTENING        960      svchost.exe    2018-11-18 23:33:04 UTC+0000
0xbf0b9e8e5320     TCPv4    0.0.0.0:49666                  0.0.0.0:0            LISTENING        968      svchost.exe    2018-11-18 23:33:05 UTC+0000
0xbf0b9e9bb710     TCPv4    0.0.0.0:49666                  0.0.0.0:0            LISTENING        968      svchost.exe    2018-11-18 23:33:05 UTC+0000
0xbf0b9e9bb710     TCPv6    :::49666                       :::0                 LISTENING        968      svchost.exe    2018-11-18 23:33:05 UTC+0000
0xbf0b9eb97a30     TCPv4    0.0.0.0:49667                  0.0.0.0:0            LISTENING        1580     spoolsv.exe    2018-11-18 23:33:05 UTC+0000
0xbf0b9eb97a30     TCPv6    :::49667                       :::0                 LISTENING        1580     spoolsv.exe    2018-11-18 23:33:05 UTC+0000
0xbf0b9eb988b0     TCPv4    0.0.0.0:49667                  0.0.0.0:0            LISTENING        1580     spoolsv.exe    2018-11-18 23:33:05 UTC+0000
0xbf0b9ebd9c80     TCPv4    0.0.0.0:445                    0.0.0.0:0            LISTENING        4        System         2018-11-18 23:33:05 UTC+0000
0xbf0b9ebd9c80     TCPv6    :::445                         :::0                 LISTENING        4        System         2018-11-18 23:33:05 UTC+0000
0xbf0b9ebf9010     TCPv4    0.0.0.0:5985                   0.0.0.0:0            LISTENING        4        System         2018-11-18 23:33:05 UTC+0000
0xbf0b9ebf9010     TCPv6    :::5985                        :::0                 LISTENING        4        System         2018-11-18 23:33:05 UTC+0000
0xbf0b9ec2dc00     TCPv4    0.0.0.0:47001                  0.0.0.0:0            LISTENING        4        System         2018-11-18 23:33:05 UTC+0000
0xbf0b9ec2dc00     TCPv6    :::47001                       :::0                 LISTENING        4        System         2018-11-18 23:33:05 UTC+0000
0xbf0b9ec33ec0     TCPv4    0.0.0.0:49669                  0.0.0.0:0            LISTENING        704      services.exe   2018-11-18 23:33:05 UTC+0000
0xbf0b9ec33ec0     TCPv6    :::49669                       :::0                 LISTENING        704      services.exe   2018-11-18 23:33:05 UTC+0000
0xbf0b9ec4a8c0     TCPv4    10.100.4.174:139               0.0.0.0:0            LISTENING        4        System         2018-11-18 23:33:04 UTC+0000
0xbf0b9ecf83e0     TCPv4    0.0.0.0:135                    0.0.0.0:0            LISTENING        836      svchost.exe    2018-11-18 23:33:03 UTC+0000
0xbf0b9ecfa8f0     TCPv4    0.0.0.0:135                    0.0.0.0:0            LISTENING        836      svchost.exe    2018-11-18 23:33:03 UTC+0000
0xbf0b9ecfa8f0     TCPv6    :::135                         :::0                 LISTENING        836      svchost.exe    2018-11-18 23:33:03 UTC+0000
0xbf0b9ed06ba0     TCPv4    0.0.0.0:49664                  0.0.0.0:0            LISTENING        608      wininit.exe    2018-11-18 23:33:03 UTC+0000
0xbf0b9ed07a30     TCPv4    0.0.0.0:49664                  0.0.0.0:0            LISTENING        608      wininit.exe    2018-11-18 23:33:03 UTC+0000
0xbf0b9ed07a30     TCPv6    :::49664                       :::0                 LISTENING        608      wininit.exe    2018-11-18 23:33:03 UTC+0000
0xbf0b9edd0c00     TCPv4    0.0.0.0:49665                  0.0.0.0:0            LISTENING        528      svchost.exe    2018-11-18 23:33:04 UTC+0000
0xbf0b9edd2a30     TCPv4    0.0.0.0:49665                  0.0.0.0:0            LISTENING        528      svchost.exe    2018-11-18 23:33:04 UTC+0000
0xbf0b9edd2a30     TCPv6    :::49665                       :::0                 LISTENING        528      svchost.exe    2018-11-18 23:33:04 UTC+0000
0xbf0b9ee908b0     TCPv4    0.0.0.0:49669                  0.0.0.0:0            LISTENING        704      services.exe   2018-11-18 23:33:05 UTC+0000
0xbf0b9f00ec40     TCPv4    0.0.0.0:49671                  0.0.0.0:0            LISTENING        712      lsass.exe      2018-11-18 23:33:13 UTC+0000
0xbf0b9f116c70     TCPv4    0.0.0.0:49671                  0.0.0.0:0            LISTENING        712      lsass.exe      2018-11-18 23:33:13 UTC+0000
0xbf0b9f116c70     TCPv6    :::49671                       :::0                 LISTENING        712      lsass.exe      2018-11-18 23:33:13 UTC+0000
0xd200000d6560     TCPv4    0.0.0.0:3389                   0.0.0.0:0            LISTENING        960      svchost.exe    2018-11-18 23:33:04 UTC+0000
0xd200000d6970     TCPv4    0.0.0.0:3389                   0.0.0.0:0            LISTENING        960      svchost.exe    2018-11-18 23:33:04 UTC+0000
0xd200000d6970     TCPv6    :::3389                        :::0                 LISTENING        960      svchost.exe    2018-11-18 23:33:04 UTC+0000</pre>
<p>Established connections on Windows</p>
<pre class="theme:dark-terminal toolbar-overlay:false nums:false nums-toggle:false expand-toggle:false lang:sh decode:true "> python vol.py -f /mnt/20181122.mem  --profile Win2016x64_14393 netscan | grep EST
Volatility Foundation Volatility Framework 2.6
0xbf0b9f385d00     TCPv4    10.100.4.174:3389              94.143.189.241:35347 ESTABLISHED      960      svchost.exe    2018-11-22 11:46:20 UTC+0000</pre>
<p>A whole range of other commands is supported, but remember: not all commands work under all circumstances, YMMV.</p>
<p>Now where is that coffee?</p>
<p>The post <a href="https://cloudar.be/awsblog/security-incident-be-prepared-memory-dumps/">Security Incident: Be Prepared &#8211; Memory Dumps</a> appeared first on <a href="https://cloudar.be">Cloudar</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://cloudar.be/awsblog/security-incident-be-prepared-memory-dumps/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Cloudar achieves AWS Premier Consulting Partner status</title>
		<link>https://cloudar.be/awsblog/cloudar-achieves-aws-premier-consulting-partner-status/</link>
					<comments>https://cloudar.be/awsblog/cloudar-achieves-aws-premier-consulting-partner-status/#respond</comments>
		
		<dc:creator><![CDATA[Bart Van Hecke]]></dc:creator>
		<pubDate>Fri, 02 Nov 2018 16:14:05 +0000</pubDate>
				<category><![CDATA[AWS Blog]]></category>
		<category><![CDATA[Amazon Web Services]]></category>
		<category><![CDATA[APN]]></category>
		<category><![CDATA[AWS]]></category>
		<category><![CDATA[MSP]]></category>
		<category><![CDATA[Partner Network]]></category>
		<category><![CDATA[Premier]]></category>
		<guid isPermaLink="false">https://cloudar.be/?p=8845</guid>

					<description><![CDATA[<p>The post <a href="https://cloudar.be/awsblog/cloudar-achieves-aws-premier-consulting-partner-status/">Cloudar achieves AWS Premier Consulting Partner status</a> appeared first on <a href="https://cloudar.be">Cloudar</a>.</p>
]]></description>
										<content:encoded><![CDATA[<div class="wpb-content-wrapper"><div id="ut-row-69b9c1d0bdb64" data-vc-full-width="true" data-vc-full-width-init="false" class="vc_row wpb_row vc_row-fluid vc_column-gap-0 ut-row-69b9c1d0bdb76" ><div class="wpb_column vc_column_container vc_col-sm-8" ><div id="ut_inner_column_69b9c1d0be5c5" class="vc_column-inner " ><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element" >
		<div class="wpb_wrapper">
			<p><strong>Kontich, Belgium, November 2nd, 2018 – </strong>Cloudar, a Belgian based AWS Consulting Partner, today announced it has achieved Premier Consulting Partner status within the Amazon Web Services (AWS) Partner Network (APN). This is the highest tier available and recognizes partners that have made significant investments in their AWS practice. Premier Partners have a proven experience in designing, deploying and migrating customer solutions on AWS, have a strong team of trained and certified technical professionals and drive a healthy revenue-generating consulting business on AWS.</p>
<p>Cloudar is the first AWS Premier Consulting Partner headquartered in Belgium.<br />
Cloudar has been part of the APN network since 2014 and has had a 100% focus on AWS since day one. This focus on one specific cloud allows Cloudar to excel and to have the best possible relationship with AWS.</p>
<p><strong>Tom De Blende, COO of Cloudar</strong>, commented on this achievement saying, <span style="color: #333333;"><em>“In a traditional hosting business, there usually is a gap between a customer and the engineers of the supplier. Within Cloudar, our Consultancy business strengthens our Managed Services business, and vice versa. Achieving AWS Premier Consulting Partner Status is a valuable recognition for all the hard work our consultants put in day after day.”</em></span></p>
<p>Innovation is in the DNA of Cloudar. Not only does it combine different business models such as reselling, consultancy, staffing and managed services but, as part of Cronos Groep, Cloudar is also involved in projects with different competence centers that are experts in their field. This results in a quick adoption of new services, from IoT to Serverless, AI to Big Data, Lex to Polly. This business model has proven successful thanks to a very customer-centric approach.</p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4" ><div id="ut_inner_column_69b9c1d0beba7" class="vc_column-inner " ><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element" >
		<div class="wpb_wrapper">
			<p><img loading="lazy" decoding="async" class="aligncenter size-full wp-image-8851" src="https://cloudar.be/wp-content/uploads/2018/11/aws-premier-badge.png" alt="" width="404" height="741" srcset="https://cloudar.be/wp-content/uploads/2018/11/aws-premier-badge.png 404w, https://cloudar.be/wp-content/uploads/2018/11/aws-premier-badge-393x720.png 393w" sizes="auto, (max-width: 404px) 100vw, 404px" /></p>

		</div>
	</div>
</div></div></div></div><div class="vc_row-full-width vc_clearfix"></div><div id="ut-row-69b9c1d0bf708" data-vc-full-width="true" data-vc-full-width-init="false" class="vc_row wpb_row vc_row-fluid vc_column-gap-0 ut-row-69b9c1d0bf718" ><div class="wpb_column vc_column_container vc_col-sm-12" ><div id="ut_inner_column_69b9c1d0bfeb2" class="vc_column-inner " ><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element" >
		<div class="wpb_wrapper">
			<p><em><span style="color: #333333;">“Our AWS Premier Consulting Partners are the very best of all APN Consulting Partners globally and we are excited to welcome Cloudar to this exclusive group,”</span></em> said <strong>Niko Mykkänen, General Manager, Alliances and Channels EMEA at AWS</strong>. <em><span style="color: #333333;">“By investing in their AWS skills Cloudar has been able to prove they have a strong bench of trained and certified consultants that are equipped to help customers through their digital transformation and into the AWS cloud.”</span></em></p>
<p>Cloudar has proven worthy of the new Premier Partner status through many successful customer engagements.<br />
<strong>Geert Vanvaerenbergh, CEO of Amista NV, SAP Rebels // Founder &amp; CEO of Belgium’s national bobsleigh team “the Belgian Bullets”</strong> confirms this result-driven approach: <span style="color: #333333;"><em>“My motto in life is that we should be part of the solution, not part of the problem. Cloudar’s team is the personification of my motto. For one of our biggest customers, Alcopa, we migrated their entire applications suite including a very significant SAP workload which is in my experience not the easiest thing to accomplish. The AWS migration went flawless, was delivered as promised; on budget and on time. This is why I love working with Cloudar on our most crucial customer missions, when failure is not an option.”</em></span></p>
<p>In early 2018, Cloudar achieved the <strong>AWS DevOps Competency</strong> and the <strong>AWS Government Competency</strong>. The AWS DevOps Competency highlights APN Partners with deep experience helping businesses implement continuous integration and continuous delivery practices, or automate infrastructure provisioning and management with configuration management tools on AWS. The AWS Government Competency highlights partners that provide solutions to government customers to deliver mission-critical workloads and applications on AWS.</p>
<p>These recent achievements and close collaboration with AWS served as a stepping stone to obtaining the Premier Partner status. Cloudar is currently actively working on qualifying for the Managed Service Provider Competency, which will be another great milestone.</p>
<p><strong>Bart Van Hecke, Co-Founder and Managing Partner of Cloudar</strong> commented: <span style="color: #333333;"><em>&#8220;We are very proud to be recognised by AWS as a Premier Consulting Partner. In the near future we will continue to invest in this relationship with AWS by obtaining more AWS competencies and Specialties. I cannot emphasize enough the importance of the team-effort that resulted in this achievement. I take pride in the team&#8217;s expertise and professionalism and look forward to continue leading Cloudar into this exciting, ever-changing world of the AWS Cloud.”</em></span></p>

		</div>
	</div>
</div></div></div></div><div class="vc_row-full-width vc_clearfix"></div><section id="ut-section-69b9c1d0c0efb" data-vc-full-width="true" data-vc-full-width-init="false" data-cursor-skin="global" class="vc_section ut-vc-160 vc_section-has-no-fill ut-section-69b9c1d0c0f09"><div id="ut-row-69b9c1d0c17e1" data-vc-full-width="true" data-vc-full-width-init="false" class="vc_row wpb_row vc_row-fluid vc_column-gap-0 ut-row-69b9c1d0c17ef" ><div class="wpb_column vc_column_container vc_col-sm-12" ><div id="ut_inner_column_69b9c1d0c1ff6" class="vc_column-inner " ><div class="wpb_wrapper"><div class="vc_message_box vc_message_box-standard vc_message_box-rounded vc_color-info vc_do_message" ><div class="vc_message_box-icon"><i class="fas fa-info-circle"></i></div><p><strong>ABOUT CLOUDAR</strong></p>
<p>Cloudar was founded by Senne Vaeyens and Bart Van Hecke in 2014 with a 100% focus on Amazon Web Services.</p>
<p>As DevOps, AWS and infrastructure experts, Cloudar offers rock-solid, highly available and scalable solutions for any type of business in the AWS Public Cloud.</p>
<p>Being part of Cronos Groep (https://cronos-groep.be/en), Cloudar can offer its customers complete solutions that go beyond AWS expertise. With over 5,000 IT consultants, a 2017 revenue of 560M € and an average yearly growth rate of 15%, Cronos Groep has become one of the most solvent and trusted technology partners in Belgium and Luxembourg.</p>
<p>Cloudar is ISO/IEC 27001 certified for information security. ISO 27001 is the internationally recognized and respected standard that evaluates if a company is following information security best practices. This completely neutral standard applies an exacting, risk-based approach to determine the security of data in an organization, assessing IT structure, processes and people.</p>
<p>Cloudar has delivered dozens of agile, right-sized projects to customers across all industries, creating a well-architected core from which these organizations can operate and grow their journey in the AWS Public Cloud. For more information, please visit <a href="https://cloudar.be">https://cloudar.be</a></p>
</div></div></div></div></div><div class="vc_row-full-width vc_clearfix"></div><a data-id="section-without-id" class="ut-vc-offset-anchor-bottom" name="section-without-id"></a></section><div class="vc_row-full-width vc_clearfix"></div>
</div><p>The post <a href="https://cloudar.be/awsblog/cloudar-achieves-aws-premier-consulting-partner-status/">Cloudar achieves AWS Premier Consulting Partner status</a> appeared first on <a href="https://cloudar.be">Cloudar</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://cloudar.be/awsblog/cloudar-achieves-aws-premier-consulting-partner-status/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Integrating Fail2Ban with AWS Network ACLs</title>
		<link>https://cloudar.be/awsblog/integrating-fail2ban-with-aws-network-acls/</link>
					<comments>https://cloudar.be/awsblog/integrating-fail2ban-with-aws-network-acls/#comments</comments>
		
		<dc:creator><![CDATA[Rutger Beyen]]></dc:creator>
		<pubDate>Fri, 05 Oct 2018 16:17:29 +0000</pubDate>
				<category><![CDATA[AWS Blog]]></category>
		<category><![CDATA[ACL]]></category>
		<category><![CDATA[Amazon]]></category>
		<category><![CDATA[Amazon Web Services]]></category>
		<category><![CDATA[AWS]]></category>
		<category><![CDATA[EC2]]></category>
		<category><![CDATA[fail2ban]]></category>
		<category><![CDATA[NACL]]></category>
		<guid isPermaLink="false">https://cloudar.be/?p=7782</guid>

					<description><![CDATA[<p>I was recently working on a project where I couldn&#8217;t lock down the Bastion instance security group ingress rule to only allow whitelisted IP addresses. Several coworkers work from home and use the Bastion to jump into backend servers and create SSH tunnels, while they did not have AWS Console access to whitelist themselves. The [&#8230;]</p>
<p>The post <a href="https://cloudar.be/awsblog/integrating-fail2ban-with-aws-network-acls/">Integrating Fail2Ban with AWS Network ACLs</a> appeared first on <a href="https://cloudar.be">Cloudar</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>I was recently working on a project where I couldn&#8217;t lock down the Bastion instance security group ingress rule to only allow whitelisted IP addresses. Several coworkers work from home and use the Bastion to jump into backend servers and create SSH tunnels, while they did not have AWS Console access to whitelist themselves. The security group ended up allowing 0.0.0.0/0 on port 22. Off course the Bastion instance would only allow ssh key based logins, but all protocols eventually have vulnerabilities. So installing multiple layers of security in depth prevents the systems from being directly affected by the latest individual vulnerabilities.</p>
<p>Combining some bits and pieces from Google allowed me to set up Fail2Ban on the Bastion instance, while the actual blocking of the IPs is done in AWS NACLs instead of the local iptables. The setup was done on an Amazon Linux instance.</p>
<h1 id="PHC-Fail2BanwithAWSVPCACLs-Considerations">Considerations</h1>
<p>AWS NACLs by default only allow 20 ingress and 20 egress rules. This is a soft limit, which can be increased to 40 by opening a support case (40 appears to be the upper hard limit).</p>
<p>If Fail2Ban wants to add another rule while the maximum has been reached, it will block the offender in the local iptables instead. Having run my implementation in several environments for a few weeks now, I have never had more than 4 to 5 IPs blocked at the same time. Unless, of course, you are the victim of a targeted DDoS attack&#8230;</p>
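<p>Since the rule budget is what decides between an NACL block and an iptables fallback, it helps to know how many deny entries a NACL already holds. A minimal Python sketch of that check; the response shape below is only illustrative, in practice it comes from boto3&#8217;s <code>describe_network_acls</code>:</p>

```python
def count_ingress_denies(describe_response):
    """Count ingress deny entries in the first NACL of a
    describe_network_acls-style response, excluding the
    catch-all rule number 32767 that every NACL carries."""
    entries = describe_response["NetworkAcls"][0]["Entries"]
    return sum(
        1 for e in entries
        if not e["Egress"]
        and e["RuleAction"] == "deny"
        and e["RuleNumber"] != 32767
    )

# Illustrative sample; a real response comes from
# boto3.client("ec2").describe_network_acls(NetworkAclIds=[acl_id])
sample = {
    "NetworkAcls": [{
        "Entries": [
            {"Egress": False, "RuleAction": "deny",
             "RuleNumber": 1, "CidrBlock": "1.2.3.4/32"},
            {"Egress": False, "RuleAction": "allow",
             "RuleNumber": 100, "CidrBlock": "0.0.0.0/0"},
            {"Egress": True, "RuleAction": "allow",
             "RuleNumber": 100, "CidrBlock": "0.0.0.0/0"},
            {"Egress": False, "RuleAction": "deny",
             "RuleNumber": 32767, "CidrBlock": "0.0.0.0/0"},
        ]
    }]
}
print(count_ingress_denies(sample))  # 1
```

<p>Comparing that count against your limit (20 or 40) tells you how much headroom is left before offenders start landing in iptables.</p>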
<h1 id="PHC-Fail2BanwithAWSVPCACLs-AWSCLI">Setup the AWS CLI</h1>
<p>Make sure you have a correctly installed and working AWS CLI on the instance. Also make sure the EC2 role has the necessary permissions to modify the EC2 Network ACL. Testing it out:</p>
<p>1. Get the current MAC address of the first interface</p>
<pre class="lang:sh decode:true">INTERFACE=$(curl --silent http://169.254.169.254/latest/meta-data/network/interfaces/macs/)</pre>
<p>2. Get the subnet ID from the MAC address</p>
<pre class="lang:sh decode:true">SUBNET_ID=$(curl --silent http://169.254.169.254/latest/meta-data/network/interfaces/macs/${INTERFACE}/subnet-id)</pre>
<p>3. Get the current Network ACL ID</p>
<pre class="lang:sh decode:true">ACL_ID=$(aws ec2 describe-network-acls --filters Name=association.subnet-id,Values=$SUBNET_ID | jq '.NetworkAcls[0].Associations[0].NetworkAclId' | sed 's/"//g')</pre>
<p class="auto-cursor-target">4. Test that you can add a rule to the ACL</p>
<pre class="lang:default decode:true ">aws ec2 create-network-acl-entry --network-acl-id $ACL_ID --ingress --rule-number 1 --protocol tcp --port-range From=0,To=65535 --cidr-block 1.2.3.4/32 --rule-action deny</pre>
<p class="auto-cursor-target">5. Verify that the above IP has been blocked</p>
<pre class="lang:sh decode:true">aws ec2 describe-network-acls --filters Name=association.network-acl-id,Values=$ACL_ID</pre>
<p class="auto-cursor-target">6. Remove the rule again</p>
<pre class="lang:sh decode:true">aws ec2 delete-network-acl-entry --network-acl-id $ACL_ID --ingress --rule-number 1</pre>
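<p>For reference, the EC2 role permissions used in the steps above can be granted with an IAM policy along these lines (a minimal sketch; consider scoping <code>Resource</code> down to the specific network ACL):</p>

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeNetworkAcls",
        "ec2:CreateNetworkAclEntry",
        "ec2:DeleteNetworkAclEntry"
      ],
      "Resource": "*"
    }
  ]
}
```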
<h1 id="PHC-Fail2BanwithAWSVPCACLs-Fail2BanAWSintegration">Fail2Ban AWS integration</h1>
<p class="auto-cursor-target">1. Install the necessary packages (if not yet present)</p>
<pre class="lang:sh decode:true ">pip install requests boto3 tabulate
yum install sqlite</pre>
<p class="auto-cursor-target">2. Create a directory to store the AWS NACL script and cd to it</p>
<pre class="lang:sh decode:true ">mkdir /opt/aws-nacl
cd /opt/aws-nacl</pre>
<p>3. Place the following content in the file aws_nacl.py in the above directory</p>
<pre class="height:50 minimize:true lang:python decode:true">"""This script is used to block and unblock IPs on Amazon EC2
network ACLs and can be used with Fail2Ban. Since only 20
inbound rules are allowed with AWS if a 'jail' is provided
the IP will be blocked on the host iptables if full"""
 
import json
import sqlite3
import argparse
import os
import pprint
import socket
import logging
import logging.handlers
import requests
import boto3
import subprocess
from tabulate import tabulate
 
#Constants
#AWS only allows 20 inbound rules; subtract the default ACL rules from 20 for the max
MAX_BLOCKS = 20
#Set rule start, by default AWS ACL starts rules at 100
RULE_BASE = 1
#Set range for rules: highest rule ID that can be used
RULE_RANGE = 50
 
 
def check_block(ip,acl):
   acl = get_acl(acl)
   entries = acl['NetworkAcls'][0]['Entries']
   for entry in entries:
      if ip in entry["CidrBlock"]:
         return True
   return False
 
 
def get_acl(acl_id):
    """This function gets the ACL given an ec2 object and ACL id"""
    ec2 = boto3.client('ec2')
    acl_response = ec2.describe_network_acls(
        NetworkAclIds=[
            acl_id,
        ],
    )
    return acl_response
 
def print_inbound_acl(acl_id):
   blocks = []
   table = {num:name[8:] for name,num in vars(socket).items() if name.startswith("IPPROTO")}
   acl = get_acl(acl_id)
   entries = acl['NetworkAcls'][0]['Entries']
   for entry in entries:
     if not entry["Egress"]:
         if "PortRange" in entry:
                ports = ({"To":entry["PortRange"]["To"], "From":entry["PortRange"]["From"]})
         else:
                ports = ({"To":"", "From":""})
         if entry['Protocol'] == "-1":
                proto = "all"
         else:
                proto = table[int (entry['Protocol'])]
         blocks.append([entry['RuleNumber'],proto,entry['CidrBlock'],ports["To"],ports["From"],entry['RuleAction']])
   print "Inbound Network ACL"
   print tabulate(blocks,headers=["Rule","Protocol","CIDR","Port From","Port To","Action"])
 
def is_acl(acl):
    ec2 = boto3.client('ec2')
    try:
        ec2.describe_network_acls(
            NetworkAclIds=[
                acl,
            ],
        )
        return True
    except Exception:
        return False
 
def get_acl_id():
    ec2 = boto3.client('ec2')
    meta = "http://169.254.169.254/latest/meta-data/network/interfaces/macs/"
    mac = requests.get(meta).text
    subnet = requests.get(meta+mac+"/subnet-id").text
 
    response = ec2.describe_network_acls(
        Filters=[
            {
                'Name': 'association.subnet-id',
                'Values':[
                    subnet
                ]
            },
        ],
        DryRun=False
    )
    return response['NetworkAcls'][0]['Associations'][0]['NetworkAclId']
 
def validate_ip(ip_address):
    ip_split = ip_address.split('.')
    if len(ip_split) != 4:
        return False
    for octet in ip_split:
        if not octet.isdigit():
            return False
        octet_int = int(octet)
        if octet_int &lt; 0 or octet_int &gt; 255:
            return False
    try:
        socket.inet_aton(ip_address)
        return True
    except socket.error:
        return False
 
def sqlite_connect(file_name):
    make_table = '''CREATE TABLE if not exists blocks (id integer PRIMARY KEY AUTOINCREMENT,
               ip text NOT NULL, acl text NOT NULL, blocked boolean NOT NULL,host boolean
               NOT NULL)'''
    if not os.path.isfile(file_name):
        dir_path = os.path.dirname(os.path.realpath(__file__))
        conn = sqlite3.connect("{}/{}".format(dir_path, file_name))
        cursor = conn.cursor()
        cursor.execute(make_table)
        conn.commit()
    else:
        try:
            conn = sqlite3.connect(file_name)
            cursor = conn.cursor()
            cursor.execute(make_table)
            conn.commit()
        except Exception:
            print "Database file is encrypted or is not a database"
            exit(1)
    return conn
 
 
def main():
    logging.basicConfig(level=logging.ERROR)
    my_logger = logging.getLogger(__file__)
    my_logger.info('Checking arguments')
    parser = argparse.ArgumentParser(description="Script to block IPs on AWS EC2 Network ACL")
    parser.add_argument('-a', '--acl', help='ACL ID')
    parser.add_argument('-j', '--jail', help='Fail2Ban Jail')
    parser.add_argument('-d', '--db', default='aws-nacl.db', help='Database')
    parser.add_argument('-b', '--block', metavar="IP", help='Block IP address')
    parser.add_argument('-u', '--unblock', metavar="IP", help='Unblock IP address')
    parser.add_argument('-g', '--get', action='store_true', help='Get ACL')
    parser.add_argument('-v', '--verbose', action='store_true', help='Verbose logging')
    args = parser.parse_args()
 
    ec2_resource = boto3.resource('ec2')
    pretty_printer = pprint.PrettyPrinter(indent=4)
 
    if args.verbose:
        my_logger.info('Setting logging to debug')
        my_logger.setLevel(logging.DEBUG)
 
    if (args.block and args.unblock):
        my_logger.error('Invalid arguments')
        parser.print_usage()
        exit(1)
 
    if args.acl:
        my_logger.info('Checking if valid AWS Network ACL')
        if not is_acl(args.acl):
            print('Invalid Network ACL ID')
            my_logger.error('Invalid Network ACL')
            exit(1)
    else:
        my_logger.info('Searching for current ACL ID')
        acl = get_acl_id()
        network_acl = ec2_resource.NetworkAcl(acl)
        my_logger.debug('Network ACL ID: {}'.format(network_acl))
 
    if args.get or (not args.block and not args.unblock):
        my_logger.info('Printing ACL')
        #pretty_printer.pprint(get_acl(acl)['NetworkAcls'][0]['Entries'])
        print_inbound_acl(acl)
        exit(0)
 
 
    my_logger.info('Configuring DB')
    conn = sqlite_connect(args.db)
    cursor = conn.cursor()
 
    if args.block:
        my_logger.info('Checking if valid IP')
        if not validate_ip(args.block):
            print "IP {} is invalid".format(args.block)
            exit(1)
        my_logger.info('Searching DB for IP: {}'.format(args.block))
        cursor.execute('''select count (*) from blocks where ip=? and blocked=1''', (args.block,))
        if cursor.fetchone()[0] &gt; 0:
            print "IP {} already blocked".format(args.block)
            exit(0)
        my_logger.info('Checking AWS block count')
        cursor.execute('''select count (*) from blocks where blocked=1 and host =0''')
        block_count = cursor.fetchone()[0]
        my_logger.debug('Currently {} IPs blocked'.format(block_count))
        if block_count &lt; MAX_BLOCKS:
            my_logger.debug('Current blocks less than Max: {}'.format(MAX_BLOCKS))
            my_logger.info('Adding block to the DB')
            cursor.execute('''insert into blocks (ip, acl, blocked,host)
                               values (?,?,?,?)''', (args.block, acl, 1, 0))
            conn.commit()
            my_logger.info('Calculating Rule number based on DB ID')
            cursor.execute('''select seq from sqlite_sequence where name="blocks"''')
            rule_num = cursor.fetchone()[0] % RULE_RANGE + RULE_BASE
            my_logger.info('Adding Network ACL')
            network_acl.create_entry(
                CidrBlock=args.block+'/32',
                DryRun=False,
                Egress=False,
                PortRange={
                    'From': 0,
                    'To': 65535
                },
                Protocol='-1',
                RuleAction='deny',
                RuleNumber=rule_num
            )
            if not check_block(args.block, acl):
               my_logger.error('Failed to block IP {} in AWS ACL'.format(args.block))
               cursor.execute('''UPDATE blocks SET blocked = 0 where ip=? and
                              blocked=1''', (args.block,))
               conn.commit()
        else:
            my_logger.debug('Max blocks on AWS Network ACL, checking for IPTables')
            if  args.jail:
                my_logger.info('Blocking IP {} in f2b-{}'.format(args.block,args.jail))
                iptables = "/sbin/iptables -w -I {} 1 -s {} -j REJECT".format(args.jail, args.block)
                print iptables
                subprocess.call(iptables, shell=True)
                cursor.execute('''insert into blocks (ip, acl, blocked,host)
                                  values (?,?,?,?)''', (args.block, '', 1, 1))
                conn.commit()
            else:
                my_logger.error('No IPtables Chain set, IP will not be blocked')
    if args.unblock:
        my_logger.info('Checking if valid IP')
        if not validate_ip(args.unblock):
            my_logger.error("IP {} is invalid".format(args.unblock))
            exit(1)
        my_logger.info('Checking for IP in the DB')
        cursor.execute('select id, host from blocks where ip=? and blocked=1', (args.unblock,))
        results = cursor.fetchone()
        if results is not None:
            my_logger.info('Found IP, getting rule number from DB')
            if results[1] == 0:
                rule_num = results[0] % RULE_RANGE + RULE_BASE
                my_logger.debug('Rule number is {}'.format(rule_num))
                my_logger.info('Deleting rule from AWS Network ACL')
                response = network_acl.delete_entry(
                    DryRun=False,
                    Egress=False,
                    RuleNumber=rule_num
                )
                my_logger.info('Updating DB')
                cursor.execute('''UPDATE blocks SET blocked = 0 where ip=? and
                                   blocked=1''', (args.unblock,))
                conn.commit()
            else:
                if args.jail:
                    my_logger.info('Unblocking IP {} in f2b-{}'.format(args.unblock,args.jail))
                    iptables = 'iptables -w -D {} -s {} -j REJECT'.format(args.jail, args.unblock)
                    subprocess.call(iptables, shell=True)
                    cursor.execute('''UPDATE blocks SET blocked = 0 where ip=? and blocked=1''', (args.unblock,))
                    conn.commit()
        else:
            my_logger.error("IP {} not in blocks database".format(args.unblock))
            exit(1)
 
 
if __name__ == "__main__":
    main()</pre>
<p class="auto-cursor-target">4. Test the script by calling it</p>
<pre class="lang:sh decode:true">python aws_nacl.py -d aws-nacl.db -b 1.2.3.4 -v</pre>
<p class="auto-cursor-target">5. Verify that the above IP was added to the ACL</p>
<pre class="lang:sh decode:true ">python aws_nacl.py -g</pre>
<p class="auto-cursor-target">6. Remove the IP again</p>
<pre class="lang:sh decode:true ">python aws_nacl.py -d aws-nacl.db -u 1.2.3.4 -v</pre>
<p class="auto-cursor-target">7. Verify that the IP was removed again</p>
<pre class="lang:sh decode:true">python aws_nacl.py -g</pre>
<h1 id="PHC-Fail2BanwithAWSVPCACLs-Fail2Ban">Fail2Ban</h1>
<p>1. Install Fail2Ban itself</p>
<pre class="lang:sh decode:true">yum install fail2ban</pre>
<p class="auto-cursor-target">2. Place the following file under /etc/fail2ban/action.d/aws.conf</p>
<div class="code panel pdl conf-macro output-block" data-hasbody="true" data-macro-name="code" data-macro-id="546f63e6-c40c-46fc-aefa-ffe8445a881a">
<div class="codeHeader panelHeader pdl hide-border-bottom">
<pre class="minimize:true lang:default decode:true"># Fail2Ban configuration file
#
# Author: Cyril Jaquier
# Modified by Yaroslav Halchenko for multiport banning
# Modified by Ryan for AWS Network ACL block
#
 
[INCLUDES]
 
before = iptables-blocktype.conf
 
[Definition]
 
# Option:  actionstart
# Notes.:  command executed once at the start of Fail2Ban.
# Values:  CMD
#
actionstart = iptables -N fail2ban-&lt;name&gt;
              iptables -A fail2ban-&lt;name&gt; -j RETURN
              iptables -I &lt;chain&gt; -p &lt;protocol&gt; --dport &lt;port&gt; -j fail2ban-&lt;name&gt;
 
# Option:  actionstop
# Notes.:  command executed once at the end of Fail2Ban
# Values:  CMD
#
actionstop = iptables -D &lt;chain&gt; -p &lt;protocol&gt; --dport &lt;port&gt; -j fail2ban-&lt;name&gt;
             iptables -F fail2ban-&lt;name&gt;
             iptables -X fail2ban-&lt;name&gt;
 
# Option:  actioncheck
# Notes.:  command executed once before each actionban command
# Values:  CMD
#
actioncheck = iptables -n -L &lt;chain&gt; | grep -q 'fail2ban-&lt;name&gt;[ \t]'
 
# Option:  actionban
# Notes.:  command executed when banning an IP. Take care that the
#          command is executed with Fail2Ban user rights.
# Tags:    See jail.conf(5) man page
# Values:  CMD
#
actionban = python /opt/aws-nacl/aws_nacl.py -d /opt/aws-nacl/aws-nacl.db -v -b &lt;ip&gt; -j fail2ban-&lt;name&gt;
# Option:  actionunban
# Notes.:  command executed when unbanning an IP. Take care that the
#          command is executed with Fail2Ban user rights.
# Tags:    See jail.conf(5) man page
# Values:  CMD
#
actionunban = python /opt/aws-nacl/aws_nacl.py -d /opt/aws-nacl/aws-nacl.db -v -u &lt;ip&gt; -j fail2ban-&lt;name&gt;
 
[Init]
 
# Default name of the chain
#
name = default
 
# Option:  port
# Notes.:  specifies port to monitor
# Values:  [ NUM | STRING ]  Default:
#
port = ssh
 
# Option:  protocol
# Notes.:  internally used by config reader for interpolations.
# Values:  [ tcp | udp | icmp | all ] Default: tcp
#
protocol = tcp
 
# Option:  chain
# Notes    specifies the iptables chain to which the fail2ban rules should be
#          added
# Values:  STRING  Default: INPUT
chain = INPUT</pre>
<p>3. Modify /etc/fail2ban/fail2ban.conf and change the logfile location (it&#8217;s easier to have a separate log rather than searching through /var/log/messages)</p>
</div>
<div>
<pre class="lang:default decode:true">logtarget = /var/log/fail2ban.log</pre>
</div>
</div>
<p class="auto-cursor-target">4. Add a file /etc/fail2ban/jail.local with the following content, adjusting the values to your needs</p>
<pre class="lang:default decode:true">[DEFAULT]
#Localhost and your office HQ range
ignoreip = 127.0.0.1/8 10.10.10.0/24

[ssh-iptables]
action = aws[name=SSH, port=ssh, protocol=tcp]

# "bantime" is the number of seconds that a host is banned.
bantime  = 3600

# "maxretry" is the number of failures before a host get banned.
maxretry = 2</pre>
<p class="auto-cursor-target">5. Check that /etc/fail2ban/filter.d/sshd.conf contains the correct matching patterns for the /var/log/secure logfile, so that it actually matches the log lines you want to be considered malicious. On Amazon Linux, I have the following in place</p>
<pre class="lang:default decode:true">failregex = ^%(__prefix_line)s(?:error: PAM: )?[aA]uthentication (?:failure|error) for .* from &lt;HOST&gt;( via \S+)?\s*$
            ^%(__prefix_line)s(?:error: PAM: )?User not known to the underlying authentication module for .* from &lt;HOST&gt;\s*$
            ^%(__prefix_line)sFailed \S+ for .* from &lt;HOST&gt;(?: port \d*)?(?: ssh\d*)?\s*$
            ^%(__prefix_line)sROOT LOGIN REFUSED.* FROM &lt;HOST&gt;\s*$
            ^%(__prefix_line)s[iI](?:llegal|nvalid) user .* from &lt;HOST&gt; .*$
            ^%(__prefix_line)sUser .+ from &lt;HOST&gt; not allowed because not listed in AllowUsers\s*$
            ^%(__prefix_line)sUser .+ from &lt;HOST&gt; not allowed because listed in DenyUsers\s*$
            ^%(__prefix_line)sUser .+ from &lt;HOST&gt; not allowed because not in any group\s*$
            ^%(__prefix_line)srefused connect from \S+ \(&lt;HOST&gt;\)\s*$
            ^%(__prefix_line)sUser .+ from &lt;HOST&gt; not allowed because a group is listed in DenyGroups\s*$
            ^%(__prefix_line)sUser .+ from &lt;HOST&gt; not allowed because none of user's groups are listed in AllowGroups\s*$
            ^%(__prefix_line)sReceived disconnect from &lt;HOST&gt; port \d*:11: Bye Bye \[preauth\]</pre>
<p class="auto-cursor-target">6. Add fail2ban to the startup list and start it</p>
<pre class="lang:sh decode:true ">chkconfig --add fail2ban
chkconfig fail2ban on
service fail2ban start</pre>
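<p>Once Fail2Ban is running, you can confirm that the jail is active and inspect current bans; the jail name below assumes the <code>[ssh-iptables]</code> example from jail.local above:</p>

```shell
fail2ban-client status
fail2ban-client status ssh-iptables
tail -n 20 /var/log/fail2ban.log
```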
<h1 id="PHC-Fail2BanwithAWSVPCACLs-Lockedmyselfout">Locked myself out</h1>
<p>Things can always go wrong, but thanks to the recently released AWS feature called &#8216;SSM Session Manager&#8217; you can always get a console window on your instance to start troubleshooting. One thing to make sure of is that your instance is running the latest version of the AWS SSM Agent, so it&#8217;s always a good idea to update it before closing your current SSH session:</p>
<pre class="lang:sh decode:true ">yum install https://s3.amazonaws.com/ec2-downloads-windows/SSMAgent/latest/linux_amd64/amazon-ssm-agent.rpm</pre>
<p>You also need to make sure that your instance is allowed to communicate with the AWS SSM service. The easiest way is to attach the &#8216;AmazonEC2RoleforSSM&#8217; policy to your EC2 role.</p>
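<p>With the agent and role in place, a session can also be opened straight from the AWS CLI instead of the console (the instance ID is illustrative, and the Session Manager plugin for the AWS CLI must be installed):</p>

```shell
aws ssm start-session --target i-0123456789abcdef0
```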
<h1 id="PHC-Fail2BanwithAWSVPCACLs-Credits">Credits</h1>
<p><a class="external-link" href="https://github.com/TheRemover/fail2ban-aws-nacl" rel="nofollow">https://github.com/TheRemover/fail2ban-aws-nacl</a></p>
<p><a class="external-link" href="https://techbytesecurity.com/2017/06/fail2ban-with-aws-network-acl" rel="nofollow">https://techbytesecurity.com/2017/06/fail2ban-with-aws-network-acl</a></p>
<p><a class="external-link" href="https://www.google.com" rel="nofollow">https://www.google.com</a></p>
<p>The post <a href="https://cloudar.be/awsblog/integrating-fail2ban-with-aws-network-acls/">Integrating Fail2Ban with AWS Network ACLs</a> appeared first on <a href="https://cloudar.be">Cloudar</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://cloudar.be/awsblog/integrating-fail2ban-with-aws-network-acls/feed/</wfw:commentRss>
			<slash:comments>1</slash:comments>
		
		
			</item>
		<item>
		<title>Windows servers patching with AWS EC2 Systems Manager</title>
		<link>https://cloudar.be/awsblog/windows-servers-patching-with-aws-ec2-systems-manager/</link>
					<comments>https://cloudar.be/awsblog/windows-servers-patching-with-aws-ec2-systems-manager/#comments</comments>
		
		<dc:creator><![CDATA[Rutger Beyen]]></dc:creator>
		<pubDate>Mon, 29 May 2017 11:39:09 +0000</pubDate>
				<category><![CDATA[AWS Blog]]></category>
		<category><![CDATA[Amazon]]></category>
		<category><![CDATA[Amazon Web Services]]></category>
		<category><![CDATA[automation]]></category>
		<category><![CDATA[AWS]]></category>
		<category><![CDATA[EC2]]></category>
		<category><![CDATA[Run Command]]></category>
		<category><![CDATA[Systems Manager Services]]></category>
		<category><![CDATA[Windows Updates]]></category>
		<guid isPermaLink="false">https://cloudar.be/?p=3733</guid>

					<description><![CDATA[<p>&#160; Amazon EC2 Systems Manager is a collection of capabilities that helps you automate management tasks such as collecting system inventory, applying operating system patches, automating the creation of Amazon Machine Images (AMIs), and configuring operating systems and applications at scale. It is available at no cost to manage both your EC2 and on-premises resources! [&#8230;]</p>
<p>The post <a href="https://cloudar.be/awsblog/windows-servers-patching-with-aws-ec2-systems-manager/">Windows servers patching with AWS EC2 Systems Manager</a> appeared first on <a href="https://cloudar.be">Cloudar</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Amazon EC2 Systems Manager is a collection of capabilities that helps you automate management tasks such as collecting system inventory, applying operating system patches, automating the creation of Amazon Machine Images (AMIs), and configuring operating systems and applications at scale. It is available at no cost to manage both your EC2 and on-premises resources!</p>
<p>Amazon EC2 Systems Manager relies on the Amazon Simple Systems Management Service (SSM) agent being installed on the guests. The SSM agent is pre-installed on Windows Server 2016 instances and on Windows Server 2003-2012 R2 instances created from AMIs published after November 2016. You need at least SSM agent version 2.0.599.0 installed on the target EC2 instance.</p>
<p>In this article we will focus on using Systems Manager to apply Windows Updates to EC2 instances. Patch management is always an operational pain point, so it&#8217;s welcome that AWS offers a solution.</p>
<p>You start by creating groups of instances by applying a tag called &#8216;Patch Group&#8217;. Then you create a group of patches by forming a patch baseline that includes and excludes the patches you require (or use the AWS default patch baseline). Finally, you create a maintenance window that applies your patch baseline to a patch group. The actual &#8216;Patch Now&#8217; run-command is nothing more than an API call, so there&#8217;s no obligation to use maintenance windows. Personally I&#8217;m a fan of Rundeck, so I&#8217;ll show you how to have the patches applied to the instances using both methods.</p>
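<p>To illustrate that last point, such a &#8216;Patch Now&#8217; call can be issued straight from the AWS CLI. A sketch using the Windows patching document of that era, <code>AWS-ApplyPatchBaseline</code> (since superseded by <code>AWS-RunPatchBaseline</code>); the tag value is illustrative:</p>

```shell
aws ssm send-command \
  --document-name "AWS-ApplyPatchBaseline" \
  --parameters "Operation=Install" \
  --targets "Key=tag:Patch Group,Values=Production" \
  --timeout-seconds 600
```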
<h2 id="WindowsServerspactchingwithAWSEC2SystemsManager-Configureyourinstances">Configure your instances</h2>
<p>The SSM agent running inside the Windows guest OS requires permissions to connect to AWS EC2 Systems Manager. We grant these rights by creating an EC2 service role with the policy document ‘AmazonEC2RoleforSSM’ attached, and then attaching this role to the instances. The instance also needs an outbound internet connection to be able to reach SSM, either through an Internet Gateway or a NAT Gateway (or NAT instance).</p>
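<p>Creating such a role and instance profile can also be scripted; a sketch with illustrative names, where ec2-trust.json would contain a standard EC2 trust policy:</p>

```shell
aws iam create-role --role-name ssm-patching-role \
  --assume-role-policy-document file://ec2-trust.json
aws iam attach-role-policy --role-name ssm-patching-role \
  --policy-arn arn:aws:iam::aws:policy/service-role/AmazonEC2RoleforSSM
aws iam create-instance-profile --instance-profile-name ssm-patching-profile
aws iam add-role-to-instance-profile \
  --instance-profile-name ssm-patching-profile --role-name ssm-patching-role
```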
<p>If you have done this right, your instance(s) should show up under &#8216;Managed Instances&#8217; in the EC2 console:</p>
<p><a href="https://cloudar.be/wp-content/uploads/2017/05/managed_instance-1.jpg"><img loading="lazy" decoding="async" class="alignnone wp-image-3735" src="https://cloudar.be/wp-content/uploads/2017/05/managed_instance-1.jpg" alt="" width="882" height="130" /></a></p>
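<p>If you prefer the CLI, you can list your managed instances, including the agent version, with <code>describe-instance-information</code>; the JMESPath query below is just one way of formatting the output:</p>
<pre class="lang:sh decode:true">aws ssm describe-instance-information \
  --query "InstanceInformationList[].{Id:InstanceId,Agent:AgentVersion,Ping:PingStatus}" \
  --output table</pre>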
<p>Take note of the SSM Agent Version; as mentioned earlier, it must be at least 2.0.599.0. The Systems Manager service also requires a &#8220;Patch Group&#8221; tag on the EC2 instance. The tag key must be exactly <strong>Patch Group</strong> and is case sensitive; the value can be anything you want to specify.</p>
<p><a href="https://cloudar.be/wp-content/uploads/2017/05/tags.jpg"><img loading="lazy" decoding="async" class="alignnone wp-image-3736" src="https://cloudar.be/wp-content/uploads/2017/05/tags.jpg" alt="" width="796" height="148" /></a></p>
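<p>Tagging can of course also be done from the CLI. A sketch, reusing the instance ID and the &#8216;dev&#8217; patch group from this walkthrough (note the quoting, since the tag key contains a space):</p>
<pre class="lang:sh decode:true">aws ec2 create-tags --resources i-07ca5621af38f256d --tags "Key=Patch Group,Value=dev"</pre>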
<p>If done correctly, your tag will be picked up by SSM. You can confirm this on the &#8216;Managed Instances&#8217; page:</p>
<p><a href="https://cloudar.be/wp-content/uploads/2017/05/ssm_status.jpg"><img loading="lazy" decoding="async" class="alignnone wp-image-3737" src="https://cloudar.be/wp-content/uploads/2017/05/ssm_status.jpg" alt="" width="792" height="263" /></a></p>
<h2 id="WindowsServerspactchingwithAWSEC2SystemsManager-PatchBaselines">Patch Baselines</h2>
<p>AWS provides a default patch baseline called &#8216;AWS-DefaultPatchBaseline&#8217;. It auto-approves all critical and security updates with a &#8216;critical&#8217; or &#8216;important&#8217; classification seven days after Microsoft releases them. If you&#8217;re happy with that, you can use this baseline. If not, you can simply create your own according to your requirements: set approval for specific products and patch classifications, exclude specific KBs, and so on.</p>
<p><a href="https://cloudar.be/wp-content/uploads/2017/05/baseline.png"><img loading="lazy" decoding="async" class="alignnone wp-image-3738" src="https://cloudar.be/wp-content/uploads/2017/05/baseline.png" alt="" width="704" height="396" /></a></p>
<p>Once you&#8217;re happy with your baseline, hit &#8216;Create&#8217;. Now assign it to one or more patch groups (or make it the default baseline and discard the AWS one). Open the &#8216;Actions&#8217; menu and choose &#8216;Modify Patch Groups&#8217;:</p>
<p><a href="https://cloudar.be/wp-content/uploads/2017/05/patchbaseline.jpg"><img loading="lazy" decoding="async" class="alignnone wp-image-3739" src="https://cloudar.be/wp-content/uploads/2017/05/patchbaseline.jpg" alt="" width="463" height="152" /></a></p>
<p>Type the names of the patch groups you defined when tagging your instances:</p>
<p><a href="https://cloudar.be/wp-content/uploads/2017/05/modifyPatchGroup.jpg"><img loading="lazy" decoding="async" class="alignnone wp-image-3740" src="https://cloudar.be/wp-content/uploads/2017/05/modifyPatchGroup.jpg" alt="" width="486" height="254" /></a></p>
<p>Your baseline is now attached to the specified patch groups. You can now start evaluating your instances against the baseline, and update them accordingly.</p>
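<p>Attaching a baseline to a patch group is also a single CLI call; the baseline ID below is an example value:</p>
<pre class="lang:sh decode:true">aws ssm register-patch-baseline-for-patch-group --baseline-id pb-06101e06cf8506be6 --patch-group "dev"</pre>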
<h2 id="WindowsServerspactchingwithAWSEC2SystemsManager-Patching">Patching</h2>
<p>Applying the patch baseline to a specific instance or to a patch group is nothing more than executing an AWS SSM run command. You can schedule this run command through AWS SSM &#8216;Maintenance Windows&#8217;, through a cron job on a server (like Rundeck), or manually through the AWS Console.</p>
<p>Let&#8217;s first check everything manually. In the AWS EC2 console, go to &#8216;Run Commands&#8217; and create a new run command. Select the &#8216;AWS-ApplyPatchBaseline&#8217; command document and pick an instance to run it on. For the &#8216;Operation&#8217;, choose &#8216;Scan&#8217;. This evaluates the instance against the baseline without installing anything yet.</p>
<p><a href="https://cloudar.be/wp-content/uploads/2017/05/ApplyPB.png"><img loading="lazy" decoding="async" class="alignnone wp-image-3741" src="https://cloudar.be/wp-content/uploads/2017/05/ApplyPB.png" alt="" width="609" height="381" /></a></p>
<p>Once the run command finishes, you can go back to the &#8216;Managed Instances&#8217; page. Highlight the instance(s) on which the run command was executed and click on the &#8216;Patch&#8217; tab. Here you can see the result of the scan:</p>
<p><a href="https://cloudar.be/wp-content/uploads/2017/05/patch_status.jpg"><img loading="lazy" decoding="async" class="alignnone wp-image-3742" src="https://cloudar.be/wp-content/uploads/2017/05/patch_status.jpg" alt="" width="719" height="212" /></a></p>
<p>To actually install the missing updates, execute the same run command document, but now with the &#8216;Install&#8217; operation. This will install the missing KBs to the instances and reboot them if needed.</p>
<p>Alternatively, execute the following AWS CLI command to accomplish the same:</p>
<pre class="lang:sh decode:true">aws ssm send-command --targets "Key=tag:Patch Group,Values=&lt;PatchGroupName&gt;" --document-name "AWS-ApplyPatchBaseline" --comment "Install|Check Windows Updates" --parameters Operation="&lt;Install|Scan&gt;"</pre>
<h2>Maintenance Windows</h2>
<p>Instead of manually starting a run command or relying on a cron job, we can also use the AWS-provided Maintenance Windows feature. Systems Manager Maintenance Windows let you define a schedule for performing actions on your instances, such as patching the operating system. Each Maintenance Window has a schedule, a duration, a set of registered targets, and a set of registered tasks.</p>
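<p>If you prefer scripting this, a Maintenance Window can also be created from the CLI. A minimal sketch, using a rate expression for the 30-minute demo schedule used later in this post (duration and cutoff are in hours; the window name is an example):</p>
<pre class="lang:sh decode:true">aws ssm create-maintenance-window --name "PatchingWindow" \
  --schedule "rate(30 minutes)" --duration 2 --cutoff 1 \
  --no-allow-unassociated-targets</pre>
<p>The call returns a Window ID (mw-&#8230;) that you need when registering targets and tasks.</p>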
<p>Before actually creating a Maintenance Window, we must configure a Maintenance Window role, so that Systems Manager can execute tasks in Maintenance Windows on our behalf. Go to the IAM page and create a new role: pick the &#8220;EC2 service role&#8221; type and make sure to attach the &#8220;AmazonSSMMaintenanceWindowRole&#8221; policy to it. Once the role is created, we must modify its trust policy. Click &#8220;Edit Trust Relationships&#8221; and turn the &#8220;Service&#8221; value into an array containing both &#8220;ec2.amazonaws.com&#8221; and &#8220;ssm.amazonaws.com&#8221; (a JSON object must not contain the same key twice, so adding a second &#8220;Service&#8221; line would effectively drop one of the two entries):</p>
<pre class="lang:default decode:true">{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": [
          "ec2.amazonaws.com",
          "ssm.amazonaws.com"
        ]
      },
      "Action": "sts:AssumeRole"
    }
  ]
}</pre>
<p>Back to SSM now to actually create the Maintenance Window. Give it a useful name and specify your preferred schedule. I&#8217;m setting &#8216;every 30 minutes&#8217; just for demonstration purposes; in a real setup you would most probably choose something like &#8216;every Sunday&#8217;. You can also configure your own cron expression.</p>
<p><a href="https://cloudar.be/wp-content/uploads/2017/05/createMX.jpg"><img loading="lazy" decoding="async" class="alignnone wp-image-3744" src="https://cloudar.be/wp-content/uploads/2017/05/createMX.jpg" alt="" width="490" height="415" /></a></p>
<p>This leaves us now with an empty Maintenance Window: there are no tasks nor targets associated yet.</p>
<p>To assign targets to the Maintenance Window, click on the &#8220;Register new targets&#8221; button on the &#8220;Targets&#8221; tab. We dynamically select the targets by using the &#8220;Patch Group&#8221; tag.</p>
<p><a href="https://cloudar.be/wp-content/uploads/2017/05/register_target.jpg"><img loading="lazy" decoding="async" class="alignnone wp-image-3745" src="https://cloudar.be/wp-content/uploads/2017/05/register_target.jpg" alt="" width="615" height="281" /></a></p>
<p>We will now have an ID linked to our &#8220;dev&#8221; Patch Group. This &#8220;Window Target ID&#8221; is used in the next step.</p>
<p><a href="https://cloudar.be/wp-content/uploads/2017/05/targets.jpg"><img loading="lazy" decoding="async" class="alignnone wp-image-3747" src="https://cloudar.be/wp-content/uploads/2017/05/targets.jpg" alt="" width="481" height="183" /></a></p>
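<p>The equivalent CLI call for registering the target looks roughly like this; the Window ID is a placeholder:</p>
<pre class="lang:sh decode:true">aws ssm register-target-with-maintenance-window --window-id mw-0123456789abcdef0 \
  --resource-type INSTANCE --targets "Key=tag:Patch Group,Values=dev"</pre>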
<p>From the &#8220;tasks&#8221; tab of the Maintenance Window, click on &#8220;Schedule new task&#8221;. Pick the &#8220;AWS-ApplyPatchBaseline&#8221; document. Under &#8220;Registered Targets&#8221;, select the correct Window Target ID. For the operation, select &#8220;Install&#8221;. For the &#8220;Role&#8221;, select the IAM role with the AmazonSSMMaintenanceWindowRole attached to it (the one we created earlier). Set your preferred concurrency level and register the task by clicking on the blue button. The end result should look like this:</p>
<p><a href="https://cloudar.be/wp-content/uploads/2017/05/task.jpg"><img loading="lazy" decoding="async" class="alignnone wp-image-3748" src="https://cloudar.be/wp-content/uploads/2017/05/task.jpg" alt="" width="636" height="539" /></a></p>
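<p>The task registration can be sketched in the CLI as well; the Window ID, Window Target ID, account ID and role name below are placeholders to fill in with your own values:</p>
<pre class="lang:sh decode:true">aws ssm register-task-with-maintenance-window --window-id mw-0123456789abcdef0 \
  --targets "Key=WindowTargetIds,Values=&lt;window-target-id&gt;" \
  --task-arn "AWS-ApplyPatchBaseline" --task-type RUN_COMMAND \
  --service-role-arn "arn:aws:iam::&lt;account-id&gt;:role/&lt;maintenance-window-role&gt;" \
  --max-concurrency 1 --max-errors 1 \
  --task-parameters '{"Operation":{"Values":["Install"]}}'</pre>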
<p>Now we have to wait for the schedule of the Maintenance Window. In this example we specified &#8216;every 30 minutes&#8217; as a schedule, so the waiting shouldn&#8217;t take too long. Under the &#8216;History&#8217; tab of the Maintenance Window you can follow all actions. The Maintenance Window will simply launch a Run Command, so you could go to that console screen too. If you enabled logging to S3, you could find the output of the Run Command over there. If not, you can view a (truncated) output via the Run Command itself:</p>
<p><a href="https://cloudar.be/wp-content/uploads/2017/05/output.jpg"><img loading="lazy" decoding="async" class="alignnone wp-image-3750" src="https://cloudar.be/wp-content/uploads/2017/05/output.jpg" alt="" width="692" height="329" /></a></p>
<pre class="lang:default decode:true">Patch Summary for i-07ca5621af38f256d
PatchGroup          : dev
BaselineId          : pb-06101e06cf8506be6
SnapshotId          : 317f2b72-2612-4740-95af-c7b3d8fb6d1e
OwnerInformation    : 
OperationType       : Install
OperationStartTime  : 2017-05-29T11:00:14.0000000Z
OperationEndTime    : 2017-05-29T11:03:18.7164313Z
InstalledCount      : 1
InstalledOtherCount : 6
FailedCount         : 0
MissingCount        : 0
NotApplicableCount  : 3

EC2AMAZ-EA5SH8I - PatchBaselineOperations Installation Results - 2017-05-29T11:03:19.537

KbArticleId Installed   Message
----------- ----------- -----------
KB890830    Yes         Success</pre>
<p>If we now go back to the &#8220;Managed Instances&#8221; page and look at the &#8220;Patch&#8221; tab of our test instance, we will see it is not missing any updates anymore!</p>
<p><a href="https://cloudar.be/wp-content/uploads/2017/05/final_status.jpg"><img loading="lazy" decoding="async" class="alignnone wp-image-3751" src="https://cloudar.be/wp-content/uploads/2017/05/final_status.jpg" alt="" width="847" height="264" /></a></p>
<p>Success! Another <a href="https://cloudar.be/wp-content/uploads/2017/05/images.jpg"><img loading="lazy" decoding="async" class="wp-image-3752 alignnone" src="https://cloudar.be/wp-content/uploads/2017/05/images.jpg" alt="" width="26" height="26" /></a> on the Automation checklist!</p>
<p>Rutger</p>
<p>The post <a href="https://cloudar.be/awsblog/windows-servers-patching-with-aws-ec2-systems-manager/">Windows servers patching with AWS EC2 Systems Manager</a> appeared first on <a href="https://cloudar.be">Cloudar</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://cloudar.be/awsblog/windows-servers-patching-with-aws-ec2-systems-manager/feed/</wfw:commentRss>
			<slash:comments>6</slash:comments>
		
		
			</item>
		<item>
		<title>AWS detailed billing with resources and tags</title>
		<link>https://cloudar.be/awsblog/aws-tagged-billing/</link>
					<comments>https://cloudar.be/awsblog/aws-tagged-billing/#respond</comments>
		
		<dc:creator><![CDATA[Bart Van Hecke]]></dc:creator>
		<pubDate>Mon, 04 Aug 2014 19:29:46 +0000</pubDate>
				<category><![CDATA[AWS Blog]]></category>
		<category><![CDATA[Amazon Web Services]]></category>
		<category><![CDATA[AWS]]></category>
		<category><![CDATA[Detailed billing]]></category>
		<category><![CDATA[Reporting]]></category>
		<category><![CDATA[resources]]></category>
		<category><![CDATA[Tags]]></category>
		<category><![CDATA[usage]]></category>
		<guid isPermaLink="false">https://cloudar.be/?p=365</guid>

					<description><![CDATA[<p>To get a complete overview of your Amazon Web Services usage based on resource usage and tags, you need to enable detailed billing within your AWS account. In order to accomplish this, browse to your AWS Account Billing Preferences &#160; &#160; &#160; First, enable the &#8220;Monthly report&#8221; to be able to receive a detailed report [&#8230;]</p>
<p>The post <a href="https://cloudar.be/awsblog/aws-tagged-billing/">AWS detailed billing with resources and tags</a> appeared first on <a href="https://cloudar.be">Cloudar</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>To get a complete overview of your Amazon Web Services usage based on resource usage and tags, you need to enable detailed billing within your AWS account.<br />
In order to accomplish this, browse to <a title="AWS billing preferences" href="https://console.aws.amazon.com/billing/home#/preferences" target="_blank" rel="noopener noreferrer"> your AWS Account Billing Preferences</a></p>
<p><a href="https://cloudar.be/wp-content/uploads/2015/05/preferences.png"><img loading="lazy" decoding="async" class="alignnone size-full wp-image-659" src="https://cloudar.be/wp-content/uploads/2015/05/preferences.png" alt="Billing Preferences" width="1178" height="682" /></a></p>
<ul>
<li>First, enable the &#8220;Monthly report&#8221; to be able to receive a detailed report of your AWS usage.</li>
<li>Secondly, you&#8217;ll need to enable &#8220;Receive Billing Reports&#8221; and <a href="http://docs.aws.amazon.com/AmazonS3/latest/UG/CreatingaBucket.html" target="_blank" rel="noopener noreferrer">create an Amazon S3 Bucket</a> in which estimated and monthly billing reports can be stored.</li>
<li>Once the Amazon S3 Bucket has been created, you must apply the appropriate permissions through a bucket policy, which grants AWS access to publish the reports to your bucket.<br />
To add the policy, sign in to the <a title="AWS Console" href="https://console.aws.amazon.com/s3" target="_blank" rel="noopener noreferrer">AWS Console</a> and edit the permissions on your bucket. For more information, please review <a title="Editing bucket permissions" href="http://docs.amazonwebservices.com/AmazonS3/latest/UG/EditingBucketPermissions.html" target="_blank" rel="noopener noreferrer">Editing Bucket Permissions</a>. Below is a sample Amazon S3 bucket policy:</li>
</ul>
<pre>{
  "Version": "2008-10-17",
  "Id": "Policy1335892530063",
  "Statement": [
    {
      "Sid": "Stmt1335892150622",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::386209384616:root"
      },
      "Action": [
        "s3:GetBucketAcl",
        "s3:GetBucketPolicy"
      ],
      "Resource": "arn:aws:s3:::cloudar-production"
    },
    {
      "Sid": "Stmt1335892526596",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::386209384616:root"
      },
      "Action": [
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:::cloudar-production/*"
    }
  ]
}
</pre>
<p>Once you have created your S3 bucket, you&#8217;ll be able to activate &#8220;Receive Billing Reports&#8221; and verify your S3 bucket policy. When your S3 bucket is validated, you can save your preferences and AWS will deliver the detailed billing reports to your Amazon S3 bucket.</p>
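<p>Once reports start arriving, you can check for and download them with the AWS CLI. A sketch, reusing the bucket name from the sample policy above; the exact report file name varies per account and month:</p>
<pre class="lang:sh decode:true"># List the billing reports AWS has delivered so far
aws s3 ls s3://cloudar-production/ | grep aws-billing

# Download one monthly detailed report to the current directory
aws s3 cp "s3://cloudar-production/&lt;account-id&gt;-aws-billing-detailed-line-items-with-resources-and-tags-2014-08.csv.zip" .</pre>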
<p>The post <a href="https://cloudar.be/awsblog/aws-tagged-billing/">AWS detailed billing with resources and tags</a> appeared first on <a href="https://cloudar.be">Cloudar</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://cloudar.be/awsblog/aws-tagged-billing/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
