<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[DevOps with Yusadolat]]></title><description><![CDATA[I love to help startups deliver better software and provide more control over their environment and software development process with the help of modern tools and automation.]]></description><link>https://blog.yusadolat.me</link><image><url>https://cdn.hashnode.com/res/hashnode/image/upload/v1657827588290/kAg0rKmC7.png</url><title>DevOps with Yusadolat</title><link>https://blog.yusadolat.me</link></image><generator>RSS for Node</generator><lastBuildDate>Sat, 11 Apr 2026 12:03:00 GMT</lastBuildDate><atom:link href="https://blog.yusadolat.me/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[From SD to Silicon: Making your NanoPi Production-Ready]]></title><description><![CDATA[We’ve all been there. You spend hours perfectly tuning your OpenWrt configuration, setting up your captive portals, and hardening your firewall. Then, six months later, the system hangs. Why? Because ]]></description><link>https://blog.yusadolat.me/from-sd-to-silicon-making-your-nanopi-production-ready</link><guid isPermaLink="true">https://blog.yusadolat.me/from-sd-to-silicon-making-your-nanopi-production-ready</guid><category><![CDATA[NanoPi]]></category><category><![CDATA[OpenWRT]]></category><category><![CDATA[microsd]]></category><dc:creator><![CDATA[Yusuf Adeyemo]]></dc:creator><pubDate>Sun, 29 Mar 2026 15:52:08 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/5a38306144029e2b31a787c5/4ae89d97-0b7a-4530-94c6-796934e24217.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>We’ve all been there. You spend hours perfectly tuning your OpenWrt configuration, setting up your captive portals, and hardening your firewall. Then, six months later, the system hangs. Why? Because the $5 microSD card you bought decided it had written its last byte of logs.</p>
<p>In the world of DevOps, we don't like single points of failure. If you're running a NanoPi (like the R5S or R6S), you have a secret weapon: <strong>Internal eMMC storage.</strong> It’s faster, more resilient, and physically soldered to the board.</p>
<p>Here is the "battle-tested" guide to migrating your OS from the card to the chip.</p>
<h3>1. The "Identify Your Target" Phase</h3>
<p>Before you start throwing data around, you need to know who is who. In Linux, your storage devices aren't labeled "SD Card" or "Internal Drive"—they're just block devices.</p>
<p>Most people reach for <code>lsblk</code>, but many lean OpenWrt builds don't include it. Instead, we go straight to the source:</p>
<pre><code class="language-bash">cat /proc/partitions
</code></pre>
<p><strong>The Discovery:</strong> You’ll see two main contenders. Usually, <code>mmcblk0</code> is your boot source (the SD card) and <code>mmcblk1</code> is the factory-fresh internal eMMC. Look at the sizes. If your SD card is 32GB and the eMMC is 30GB, you’ve found your targets.</p>
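<p>For orientation, the listing looks something like this (device names, minor numbers, and sizes are illustrative, not from a specific board):</p>
<pre><code class="language-plaintext">major minor  #blocks  name

 179        0   31166976 mmcblk0      &lt;- 32GB SD card (current boot source)
 179        1     262144 mmcblk0p1
 179        2   30904832 mmcblk0p2
 179        8   30535680 mmcblk1      &lt;- internal eMMC (the target)
</code></pre>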
<h3>2. The Great Migration (The "DD" Command)</h3>
<p>We use the oldest tool in the shed: <code>dd</code>. Its name is usually glossed as "Data Duplicator," but many call it "Disk Destroyer": swap the <code>if</code> (input) and <code>of</code> (output) arguments and you’ll wipe your work instead of copying it.</p>
<p>Run the clone:</p>
<pre><code class="language-bash">dd if=/dev/mmcblk0 of=/dev/mmcblk1 bs=1M &amp;&amp; sync
</code></pre>
<p><strong>A Note on the "Silent Treatment":</strong> Don't panic if the cursor just blinks for 15 minutes. Most embedded versions of <code>dd</code> don't show a progress bar. It’s moving 30GB of data. Grab a coffee. If you’re anxious, open a second SSH window and run <code>pgrep -l dd</code> to make sure the heart is still beating.</p>
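<p>If you want actual numbers instead of a heartbeat, here's a rough sketch that polls the kernel's block statistics for the target device (assuming the eMMC is <code>mmcblk1</code>; field 7 of <code>/sys/block/&lt;dev&gt;/stat</code> is sectors written). Don't be tempted to send signals to <code>dd</code> on OpenWrt—BusyBox <code>dd</code> may simply die instead of printing stats.</p>
<pre><code class="language-bash"># Poll how much has been written to the eMMC so far (512-byte sectors)
while true; do
  awk '{printf "written to eMMC: %d MiB\n", $7 * 512 / 1048576}' /sys/block/mmcblk1/stat
  sleep 10
done
</code></pre>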
<h3>3. The "No Space Left" Error (The Good Kind)</h3>
<p>At the end, you might see: <code>dd: error writing '/dev/mmcblk1': No space left on device</code>.</p>
<p>In any other context, this is a failure. Here? <strong>It’s a victory.</strong> It means the eMMC was slightly smaller than the SD card, and <code>dd</code> filled every single available block. Since the actual OpenWrt partitions are at the very beginning of the drive, the OS is safe and sound on the new chip.</p>
<h3>4. The Moment of Truth</h3>
<p>The "Cut-over" is simple but nerve-wracking:</p>
<ol>
<li><p>Run <code>poweroff</code>.</p>
</li>
<li><p><strong>Pull the microSD card out.</strong> This is the most important step—if the card is in, the NanoPi will always prefer it over the internal storage.</p>
</li>
<li><p>Power it back on.</p>
</li>
</ol>
<p>If that SSH prompt returns without the SD card in the slot, you’re officially running on the metal.</p>
<hr />
<p><strong>Pro-Tip for the Reader:</strong> After booting from eMMC, your "Free Space" might look smaller than expected because you cloned a fixed-size image. Your next task is to use <code>parted</code> or the LuCI web interface to expand the partition to fill the rest of your eMMC.</p>
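<p>A minimal sketch of that expansion from the shell, assuming the common ext4 image layout where the root filesystem is the second partition—verify with <code>parted /dev/mmcblk1 print</code> before running anything:</p>
<pre><code class="language-bash"># Grow partition 2 to the end of the eMMC, then grow the filesystem into it.
# The partition number and the ext4 assumption are illustrative - check your layout first!
parted /dev/mmcblk1 resizepart 2 100%
resize2fs /dev/mmcblk1p2
</code></pre>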
<p><strong>Happy Routing.</strong></p>
]]></content:encoded></item><item><title><![CDATA[Understanding TOTP: What Really Happens When You Generate That 6-Digit Code]]></title><description><![CDATA[This article started from a tweet.
Someone on Twitter said they "lowkey want to understand the technology behind Google Authenticator" and I dropped a quick reply - explaining that it's basically TOTP: your device and the server share a secret key, b...]]></description><link>https://blog.yusadolat.me/understanding-totp-what-really-happens-when-you-generate-that-6-digit-code</link><guid isPermaLink="true">https://blog.yusadolat.me/understanding-totp-what-really-happens-when-you-generate-that-6-digit-code</guid><category><![CDATA[totp]]></category><category><![CDATA[authentication]]></category><category><![CDATA[Google]]></category><category><![CDATA[How It Works]]></category><dc:creator><![CDATA[Yusuf Adeyemo]]></dc:creator><pubDate>Mon, 08 Dec 2025 16:26:40 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1765210568805/784ebd89-c559-4001-8f48-37aaffb6378b.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>This article started from a tweet.</p>
<p>Someone on Twitter said they "lowkey want to understand the technology behind Google Authenticator" and I dropped a quick reply - explaining that it's basically TOTP: your device and the server share a secret key, both compute a code using HMAC-SHA1 and the current 30-second time window. No network calls. No "previous code." Same secret + same time slice = same 6-digit code.</p>
<p>That reply got some traction, and a few people DMed me for a deeper breakdown. So here we are.</p>
<p>If you've ever wondered how your phone generates the exact same 6-digit code the server expects - with no internet request, no sync, nothing - this one's for you.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1765190307368/31f94bf0-e6c8-4fbb-bab2-5077e9dc415f.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-the-problem-with-passwords">The Problem With Passwords</h2>
<p>Passwords are static. Once someone has it, they have it forever - or until you change it. Even with a strong password, you're one phishing attack or database breach away from compromise.</p>
<p>Two-factor authentication fixes this by adding something that changes. But here's the catch - if your phone needs to call a server every time to get a new code, that's a point of failure. What happens when you're offline? On a plane? In a basement with no signal?</p>
<p>This is where TOTP comes in.</p>
<h2 id="heading-totp-time-based-one-time-password">TOTP - Time-based One-Time Password</h2>
<p>TOTP is defined in RFC 6238, but don't let the RFC scare you. The core idea is dead simple:</p>
<p><strong>Both your phone and the server share a secret. They both know the current time. They both do the same math. They both get the same answer.</strong></p>
<p>That's it. No network calls. No synchronization requests. Just two parties doing identical calculations independently.</p>
<h2 id="heading-the-setup-that-qr-code-you-scanned">The Setup - That QR Code You Scanned</h2>
<p>When you enable 2FA on any service, they show you a QR code. That QR code contains a URL that looks something like this:</p>
<pre><code class="lang-plaintext">otpauth://totp/MyService:yusuf@yusadolat.me?secret=JBSWY3DPEHPK3PXP&amp;issuer=MyService
</code></pre>
<p>The important part is the <code>secret</code>. This is a base32-encoded string that both your authenticator app and the server will store. This shared secret is the foundation of everything.</p>
<p>You scan it once. Your app saves it. The server saves it. They never exchange it again.</p>
<h2 id="heading-the-math-how-codes-get-generated">The Math - How Codes Get Generated</h2>
<p>Every 30 seconds, both sides perform this calculation:</p>
<p><strong>Step 1: Get the current time window</strong></p>
<p>Take the current Unix timestamp and divide by 30. Floor it.</p>
<pre><code class="lang-plaintext">time_step = floor(current_unix_time / 30)
</code></pre>
<p>Right now, as I write this, the Unix timestamp is around 1733644800. Divided by 30, floored, gives us 57788160. This number changes every 30 seconds.</p>
<p><strong>Step 2: Run HMAC-SHA1</strong></p>
<p>Feed the time step and the shared secret into HMAC-SHA1:</p>
<pre><code class="lang-plaintext">hmac_result = HMAC-SHA1(secret, time_step)
</code></pre>
<p>This produces a 20-byte hash. It looks like random garbage, but it's deterministic - same inputs always give same outputs.</p>
<p><strong>Step 3: Dynamic Truncation</strong></p>
<p>20 bytes is too long for humans to type. So we extract 4 bytes from a specific position (determined by the last nibble of the hash), convert to an integer, and take modulo 1,000,000.</p>
<pre><code class="lang-plaintext">offset = hmac_result[19] &amp; 0x0f
code = (hmac_result[offset:offset+4] &amp; 0x7fffffff) % 1000000
</code></pre>
<p>Boom. You have your 6-digit code.</p>
<h2 id="heading-why-this-is-actually-clever">Why This Is Actually Clever</h2>
<p>Think about what just happened:</p>
<ol>
<li><p><strong>No network needed</strong> - Your phone doesn't call anyone. The server doesn't push anything. Both just compute.</p>
</li>
<li><p><strong>Codes expire automatically</strong> - Because time moves forward, old codes become useless. Even if someone shoulder-surfs your code, they have maybe 30 seconds to use it.</p>
</li>
<li><p><strong>Can't predict future codes</strong> - Without the secret, you can't compute tomorrow's codes. The HMAC function is one-way.</p>
</li>
<li><p><strong>Replay attacks fail</strong> - Use a code once, the server marks that time window as used. Try it again, rejected.</p>
</li>
</ol>
<h2 id="heading-when-things-go-wrong">When Things Go Wrong</h2>
<p>The system assumes both parties agree on what time it is. This is usually fine - your phone syncs with NTP servers, and servers have accurate clocks.</p>
<p>But I've seen people with phones that have "manual time" set, drifting by minutes. Their codes stop working and they have no idea why. The server is computing codes for 10:45:00, their phone is computing for 10:43:00. Different time windows, different codes.</p>
<p>Most implementations allow a small tolerance - they'll accept codes from one time window before or after. But drift too far and you're locked out.</p>
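<p>To make that tolerance concrete, here's a sketch of server-side verification that accepts the previous, current, and next windows. The function names and the window size are illustrative, not from any particular library—the truncation math is the same one shown in full further down:</p>
<pre><code class="lang-python">import base64
import hashlib
import hmac
import struct
import time

def totp_at(secret: str, time_step: int) -&gt; str:
    # Same RFC 6238 math as the full implementation below, for one given time step
    key = base64.b32decode(secret.upper())
    digest = hmac.new(key, struct.pack('&gt;Q', time_step), hashlib.sha1).digest()
    offset = digest[-1] &amp; 0x0f
    code = (struct.unpack('&gt;I', digest[offset:offset + 4])[0] &amp; 0x7fffffff) % 1000000
    return f'{code:06d}'

def verify_totp(secret: str, submitted: str, window: int = 1) -&gt; bool:
    # Accept the current window plus `window` steps on either side
    now_step = int(time.time()) // 30
    return any(
        hmac.compare_digest(totp_at(secret, now_step + drift), submitted)
        for drift in range(-window, window + 1)
    )
</code></pre>
<p>Note the comparison via <code>hmac.compare_digest</code>—a plain <code>==</code> would leak timing information.</p>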
<h2 id="heading-the-recovery-code-situation">The Recovery Code Situation</h2>
<p>Those backup codes you're told to save somewhere? They're not TOTP. They're just long random strings stored in a database. Use one, it gets deleted. No time component, no algorithm - just a simple lookup.</p>
<p>Save them. Seriously. Losing access to your authenticator without backup codes is a special kind of pain.</p>
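<p>For a feel of how simple that side is, here's a hypothetical sketch of minting backup codes—a real service would also rate-limit attempts and use a slow hash like bcrypt rather than bare SHA-256:</p>
<pre><code class="lang-python">import hashlib
import secrets

def mint_backup_codes(count: int = 10) -&gt; tuple[list[str], list[str]]:
    # Return (plaintext codes shown to the user once, hashes to store)
    codes = [secrets.token_hex(5) for _ in range(count)]  # e.g. 'a3f9c1d27b'
    hashes = [hashlib.sha256(code.encode()).hexdigest() for code in codes]
    return codes, hashes
</code></pre>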
<h2 id="heading-show-me-the-code">Show Me The Code</h2>
<p>Here's a minimal Python implementation to make this concrete:</p>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> hmac
<span class="hljs-keyword">import</span> hashlib
<span class="hljs-keyword">import</span> struct
<span class="hljs-keyword">import</span> time
<span class="hljs-keyword">import</span> base64

<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">generate_totp</span>(<span class="hljs-params">secret: str</span>) -&gt; str:</span>
    <span class="hljs-comment"># Decode the base32 secret</span>
    key = base64.b32decode(secret.upper())

    <span class="hljs-comment"># Get current time step (30-second window)</span>
    time_step = int(time.time()) // <span class="hljs-number">30</span>

    <span class="hljs-comment"># Pack as big-endian 8-byte integer</span>
    time_bytes = struct.pack(<span class="hljs-string">'&gt;Q'</span>, time_step)

    <span class="hljs-comment"># Compute HMAC-SHA1</span>
    hmac_hash = hmac.new(key, time_bytes, hashlib.sha1).digest()

    <span class="hljs-comment"># Dynamic truncation</span>
    offset = hmac_hash[<span class="hljs-number">-1</span>] &amp; <span class="hljs-number">0x0f</span>
    code_int = struct.unpack(<span class="hljs-string">'&gt;I'</span>, hmac_hash[offset:offset+<span class="hljs-number">4</span>])[<span class="hljs-number">0</span>]
    code_int &amp;= <span class="hljs-number">0x7fffffff</span>
    code = code_int % <span class="hljs-number">1000000</span>

    <span class="hljs-keyword">return</span> <span class="hljs-string">f'<span class="hljs-subst">{code:<span class="hljs-number">06</span>d}</span>'</span>

<span class="hljs-comment"># Test it</span>
secret = <span class="hljs-string">'JBSWY3DPEHPK3PXP'</span>  <span class="hljs-comment"># Example secret - the same string you enter as a setup key in Google Authenticator</span>
print(generate_totp(secret))
</code></pre>
<p>Want to see it work in real-time? Here's how to test:</p>
<ol>
<li><p>Open Google Authenticator (or any TOTP app)</p>
</li>
<li><p>Tap the <strong>+</strong> button to add a new account</p>
</li>
<li><p>Select <strong>"Enter a setup key"</strong></p>
</li>
<li><p>Enter any name (e.g., "TOTP Test")</p>
</li>
<li><p>For the key, enter: <code>JBSWY3DPEHPK3PXP</code></p>
</li>
<li><p>Make sure it's set to <strong>Time-based</strong></p>
</li>
<li><p>Save it</p>
</li>
</ol>
<p>Now run the Python script. The 6-digit code it prints should match what's showing in your authenticator app. If you're a few seconds off, wait for the next 30-second window and try again.</p>
<h2 id="heading-wrapping-up">Wrapping Up</h2>
<p>There's no cloud magic happening when your authenticator generates codes. It's just math - the same math running independently on your device and the server, anchored to the same clock.</p>
<p>Understanding this changes how you think about 2FA. It's not some opaque security feature. It's a clever application of cryptographic primitives that's been working reliably for over a decade.</p>
<p>Next time you punch in those 6 digits, you'll know exactly what's happening behind the scenes.</p>
<hr />
<p><em>If you found this useful, I write about DevOps, security, and cloud infrastructure. Connect with me on Twitter</em> <a target="_blank" href="https://twitter.com/Yusadolat"><em>@Yusadolat</em></a><em>.</em></p>
]]></content:encoded></item><item><title><![CDATA[Speed Up Your AWS CodeBuild Docker Builds by 25% Using ECR as a Remote Cache]]></title><description><![CDATA[Have you ever sat there waiting for your CodeBuild project to rebuild your entire Docker image... again? Even though you only changed a single line of code?
Yeah, me too. And it's frustrating.
Today, I'm going to show you how I reduced our Docker bui...]]></description><link>https://blog.yusadolat.me/speed-up-your-aws-codebuild-docker-builds-by-25-using-ecr-as-a-remote-cache</link><guid isPermaLink="true">https://blog.yusadolat.me/speed-up-your-aws-codebuild-docker-builds-by-25-using-ecr-as-a-remote-cache</guid><category><![CDATA[AWS]]></category><category><![CDATA[Docker]]></category><category><![CDATA[caching]]></category><category><![CDATA[ecr]]></category><dc:creator><![CDATA[Yusuf Adeyemo]]></dc:creator><pubDate>Mon, 20 Oct 2025 08:50:14 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1760950073357/1a867b14-b2f3-45fc-93ea-76aed8fbef6f.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<hr />
<p>Have you ever sat there waiting for your CodeBuild project to rebuild your entire Docker image... again? Even though you only changed a single line of code?</p>
<p>Yeah, me too. And it's frustrating.</p>
<p>Today, I'm going to show you how I reduced our Docker build times from <strong>~7 minutes down to ~5 minutes</strong> (that's about 25-30% faster!) by implementing Amazon ECR as a persistent cache backend. This is based on an <a target="_blank" href="https://aws.amazon.com/blogs/devops/reduce-docker-image-build-time-on-aws-codebuild-using-amazon-ecr-as-a-remote-cache/">official AWS blog post</a>, but I'll walk you through the practical implementation.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1760945938051/de053217-dfce-4c6b-861a-8d1bc516da8b.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-the-problem-why-your-builds-are-slow">The Problem: Why Your Builds Are Slow</h2>
<p>Here's the thing about AWS CodeBuild: every build runs in a <strong>completely fresh, isolated environment</strong>. That means:</p>
<ul>
<li><p>No build artifacts carry over between builds</p>
</li>
<li><p>Every build starts from scratch</p>
</li>
<li><p>CodeBuild's "local cache" is temporary and unreliable (works on a "best-effort" basis)</p>
</li>
<li><p>If your builds happen at different times throughout the day, the local cache probably isn't helping you</p>
</li>
</ul>
<p>So even if you only changed one line in your code, CodeBuild rebuilds every single Docker layer. Every. Single. Time.</p>
<h2 id="heading-the-solution-ecr-registry-cache-backend">The Solution: ECR Registry Cache Backend</h2>
<p>The solution is surprisingly elegant: store your Docker layer cache <strong>persistently</strong> in Amazon ECR (Elastic Container Registry). Think of it as a separate "cache image" that lives alongside your actual application image.</p>
<p>Here's how it works:</p>
<ol>
<li><p><strong>First Build</strong>: Build from scratch, then export the cache to ECR as a separate image</p>
</li>
<li><p><strong>Subsequent Builds</strong>: Import the cache from ECR, rebuild only what changed, export the updated cache back</p>
</li>
</ol>
<p>The beauty? Your cache is always available, no matter when you trigger a build.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1760947422625/fc549048-9e48-4ef6-b4e9-3a8260ff70da.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-what-youll-need">What You'll Need</h2>
<p>Before we start, make sure you have:</p>
<ul>
<li><p>An existing AWS CodeBuild project that builds Docker images</p>
</li>
<li><p>An ECR repository where your images are stored</p>
</li>
<li><p>IAM permissions for your CodeBuild role to push/pull from ECR (if you can already push images, you're good!)</p>
</li>
<li><p>About 10 minutes to implement this</p>
</li>
</ul>
<h2 id="heading-step-by-step-implementation">Step-by-Step Implementation</h2>
<h3 id="heading-step-1-understanding-your-current-buildspec">Step 1: Understanding Your Current Buildspec</h3>
<p>Your current buildspec probably looks something like this:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">version:</span> <span class="hljs-number">0.2</span>
<span class="hljs-attr">phases:</span>
  <span class="hljs-attr">install:</span>
    <span class="hljs-attr">commands:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">aws</span> <span class="hljs-string">ecr</span> <span class="hljs-string">get-login-password</span> <span class="hljs-string">|</span> <span class="hljs-string">docker</span> <span class="hljs-string">login</span> <span class="hljs-string">...</span>

  <span class="hljs-attr">build:</span>
    <span class="hljs-attr">commands:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">docker</span> <span class="hljs-string">build</span> <span class="hljs-string">-t</span> <span class="hljs-string">myapp:latest</span> <span class="hljs-string">.</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">docker</span> <span class="hljs-string">tag</span> <span class="hljs-string">myapp:latest</span> <span class="hljs-string">$ECR_REPO:latest</span>

  <span class="hljs-attr">post_build:</span>
    <span class="hljs-attr">commands:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">docker</span> <span class="hljs-string">push</span> <span class="hljs-string">$ECR_REPO:latest</span>
</code></pre>
<p>This is the "basic" approach. Every build starts from zero.</p>
<h3 id="heading-step-2-add-cache-tag-variable">Step 2: Add Cache Tag Variable</h3>
<p>First, let's define a separate tag for our cache image. In your <code>install</code> phase, add:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">install:</span>
  <span class="hljs-attr">commands:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-string">CACHE_TAG=dev-cache</span>  <span class="hljs-comment"># or prod-cache, staging-cache, etc.</span>
    <span class="hljs-bullet">-</span> <span class="hljs-string">IMAGE_TAG=latest</span>     <span class="hljs-comment"># your actual app image tag</span>
</code></pre>
<p>This creates a separate cache image (e.g., <code>myapp:dev-cache</code>) that's distinct from your application image (<code>myapp:latest</code>).</p>
<h3 id="heading-step-3-create-the-buildx-builder">Step 3: Create the Buildx Builder</h3>
<p>Here's the key part: Docker's default builder doesn't support registry cache backends. We need to create a new builder using <strong>buildx</strong> with the <strong>docker-container</strong> driver (we'll name the builder <code>containerd</code>).</p>
<p>Add this to your <code>install</code> phase:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">install:</span>
  <span class="hljs-attr">commands:</span>
    <span class="hljs-comment"># ... your existing commands ...</span>
    <span class="hljs-bullet">-</span> <span class="hljs-string">docker</span> <span class="hljs-string">buildx</span> <span class="hljs-string">create</span> <span class="hljs-string">--name</span> <span class="hljs-string">containerd</span> <span class="hljs-string">--driver=docker-container</span> <span class="hljs-string">--driver-opt</span> <span class="hljs-string">default-load=true</span> <span class="hljs-string">--use</span> <span class="hljs-string">||</span> <span class="hljs-string">docker</span> <span class="hljs-string">buildx</span> <span class="hljs-string">use</span> <span class="hljs-string">containerd</span>
</code></pre>
<p><strong>What's happening here?</strong></p>
<ul>
<li><p><code>docker buildx create</code>: Creates a new builder instance</p>
</li>
<li><p><code>--driver=docker-container</code>: Runs BuildKit in a dedicated container (the default builder can't export a registry cache)</p>
</li>
<li><p><code>--driver-opt default-load=true</code>: Loads built images into local Docker (important!)</p>
</li>
<li><p><code>|| docker buildx use containerd</code>: If the builder already exists, just switch to it</p>
</li>
</ul>
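<p>Before building, you can sanity-check that the new builder is active (the exact output format varies by Docker version):</p>
<pre><code class="lang-bash">docker buildx ls   # 'containerd' should appear with the docker-container driver, marked with an asterisk
</code></pre>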
<h3 id="heading-step-4-replace-your-docker-build-command">Step 4: Replace Your Docker Build Command</h3>
<p>Now replace your regular <code>docker build</code> command with the new buildx version:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">build:</span>
  <span class="hljs-attr">commands:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-string">|
      docker buildx build \
        --builder=containerd \
        --cache-from type=registry,ref=$ECR_REPO:$CACHE_TAG \
        --cache-to type=registry,ref=$ECR_REPO:$CACHE_TAG,mode=max,image-manifest=true \
        -t $ECR_REPO:$IMAGE_TAG \
        --load \
        .</span>
</code></pre>
<p>Let me break down what each flag does:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1760947840104/08c0c372-5c0c-4687-9c69-1e455b4fb821.png" alt class="image--center mx-auto" /></p>
<ul>
<li><p><code>--builder=containerd</code>: Use the builder we just created</p>
</li>
<li><p><code>--cache-from type=registry,ref=$ECR_REPO:$CACHE_TAG</code>: <strong>Import cache</strong> from ECR</p>
</li>
<li><p><code>--cache-to type=registry,ref=$ECR_REPO:$CACHE_TAG,mode=max,image-manifest=true</code>: <strong>Export cache</strong> back to ECR</p>
<ul>
<li><p><code>mode=max</code>: Export all layers (recommended for best caching)</p>
</li>
<li><p><code>image-manifest=true</code>: Required for ECR storage</p>
</li>
</ul>
</li>
<li><p><code>-t $ECR_REPO:$IMAGE_TAG</code>: Tag your final image as usual</p>
</li>
<li><p><code>--load</code>: Load the built image into local Docker (so you can run it in post_build)</p>
</li>
<li><p><code>.</code>: Your Dockerfile location</p>
</li>
</ul>
<h3 id="heading-step-5-complete-example-buildspec">Step 5: Complete Example Buildspec</h3>
<p>Here's what a complete, production-ready buildspec looks like:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">version:</span> <span class="hljs-number">0.2</span>
<span class="hljs-attr">phases:</span>
  <span class="hljs-attr">install:</span>
    <span class="hljs-attr">commands:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">echo</span> <span class="hljs-string">Logging</span> <span class="hljs-string">in</span> <span class="hljs-string">to</span> <span class="hljs-string">Amazon</span> <span class="hljs-string">ECR</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">aws</span> <span class="hljs-string">ecr</span> <span class="hljs-string">get-login-password</span> <span class="hljs-string">--region</span> <span class="hljs-string">$AWS_REGION</span> <span class="hljs-string">|</span> <span class="hljs-string">docker</span> <span class="hljs-string">login</span> <span class="hljs-string">--username</span> <span class="hljs-string">AWS</span> <span class="hljs-string">--password-stdin</span> <span class="hljs-string">$AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">CACHE_TAG=dev-cache</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">IMAGE_TAG=${CODEBUILD_RESOLVED_SOURCE_VERSION:-latest}</span>  <span class="hljs-comment"># commit-based tag, falls back to latest</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">ECR_REPO=$AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/myapp</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">docker</span> <span class="hljs-string">buildx</span> <span class="hljs-string">create</span> <span class="hljs-string">--name</span> <span class="hljs-string">containerd</span> <span class="hljs-string">--driver=docker-container</span> <span class="hljs-string">--driver-opt</span> <span class="hljs-string">default-load=true</span> <span class="hljs-string">--use</span> <span class="hljs-string">||</span> <span class="hljs-string">docker</span> <span class="hljs-string">buildx</span> <span class="hljs-string">use</span> <span class="hljs-string">containerd</span>

  <span class="hljs-attr">build:</span>
    <span class="hljs-attr">commands:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">echo</span> <span class="hljs-string">Build</span> <span class="hljs-string">started</span> <span class="hljs-string">on</span> <span class="hljs-string">`date`</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">|
        docker buildx build \
          --builder=containerd \
          --cache-from type=registry,ref=$ECR_REPO:$CACHE_TAG \
          --cache-to type=registry,ref=$ECR_REPO:$CACHE_TAG,mode=max,image-manifest=true \
          -t $ECR_REPO:$IMAGE_TAG \
          --load \
          .
</span>      <span class="hljs-bullet">-</span> <span class="hljs-string">docker</span> <span class="hljs-string">tag</span> <span class="hljs-string">$ECR_REPO:$IMAGE_TAG</span> <span class="hljs-string">$ECR_REPO:latest</span>

  <span class="hljs-attr">post_build:</span>
    <span class="hljs-attr">commands:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">echo</span> <span class="hljs-string">Build</span> <span class="hljs-string">completed</span> <span class="hljs-string">on</span> <span class="hljs-string">`date`</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">docker</span> <span class="hljs-string">push</span> <span class="hljs-string">$ECR_REPO:$IMAGE_TAG</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">docker</span> <span class="hljs-string">push</span> <span class="hljs-string">$ECR_REPO:latest</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">printf '[{"name":"myapp","imageUri":"%s"}]' "$ECR_REPO:$IMAGE_TAG" &gt; imageDefinitions.json</span> <span class="hljs-comment"># creates the artifact listed below; container name is illustrative</span>

<span class="hljs-attr">artifacts:</span>
  <span class="hljs-attr">files:</span> 
    <span class="hljs-bullet">-</span> <span class="hljs-string">imageDefinitions.json</span>
</code></pre>
<h3 id="heading-step-6-update-your-codebuild-project">Step 6: Update Your CodeBuild Project</h3>
<p>You can update your buildspec in two ways:</p>
<p><strong>Option 1: If your buildspec is in your repo</strong> Just commit the changes and push. CodeBuild will pick up the new buildspec automatically.</p>
<p><strong>Option 2: If your buildspec is defined in CodeBuild</strong> Use the AWS CLI:</p>
<pre><code class="lang-bash">aws codebuild update-project --name your-project-name --cli-input-json file://buildspec.json
</code></pre>
<p>Note that <code>--cli-input-json</code> expects the full project definition (with the new buildspec inline under <code>source.buildspec</code>), not the buildspec file on its own. Alternatively, update it through the AWS Console: CodeBuild → Your Project → Edit → Buildspec</p>
<h2 id="heading-what-to-expect-first-build-vs-subsequent-builds">What to Expect: First Build vs Subsequent Builds</h2>
<h3 id="heading-first-build-the-investment">First Build (The Investment)</h3>
<p>Your first build after implementing this will actually take <strong>slightly longer</strong> (maybe 30-60 seconds more). Don't panic! This is normal.</p>
<p>Here's what's happening:</p>
<ol>
<li><p>Creating the buildx builder (~5-10 seconds)</p>
</li>
<li><p>Attempting to import cache (fails - no cache exists yet)</p>
</li>
<li><p>Building all layers from scratch</p>
</li>
<li><p><strong>Exporting the cache to ECR</strong> (new step, adds ~20-40 seconds)</p>
</li>
</ol>
<p>You'll see messages like:</p>
<pre><code class="lang-plaintext">=&gt; importing cache manifest from $ECR_REPO:dev-cache
=&gt; error: not found
</code></pre>
<p>This is expected! The cache doesn't exist yet.</p>
<h3 id="heading-subsequent-builds-the-payoff">Subsequent Builds (The Payoff)</h3>
<p>This is where the magic happens. Your next builds will:</p>
<ol>
<li><p>Successfully import the cache from ECR</p>
</li>
<li><p>Identify which layers haven't changed</p>
</li>
<li><p>Reuse cached layers (fast!)</p>
</li>
<li><p>Rebuild only the changed layers</p>
</li>
<li><p>Export the updated cache</p>
</li>
</ol>
<p>Expected time savings:</p>
<ul>
<li><p><strong>Before</strong>: 6-7 minutes (full rebuild every time)</p>
</li>
<li><p><strong>After</strong>: 5-5.5 minutes (25-30% faster!)</p>
</li>
<li><p><strong>Savings</strong>: 1-2 minutes per build</p>
</li>
</ul>
<p>If you're doing 10 builds a day, that's <strong>10-20 minutes saved daily</strong>. Over a month? That's <strong>5-10 hours</strong> of compute time and costs saved.</p>
<h2 id="heading-verifying-its-working">Verifying It's Working</h2>
<p>After your first build completes, check your ECR repository. You should now see <strong>two image tags</strong>:</p>
<ol>
<li><p>Your application image (e.g., <code>latest</code>)</p>
</li>
<li><p>Your cache image (e.g., <code>dev-cache</code>)</p>
</li>
</ol>
<p>The cache image will be roughly the same size as your application image - this is normal! It's storing all the layer information.</p>
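<p>You can confirm this from the CLI as well—a quick sketch using the AWS CLI (the repository name is the example one from above):</p>
<pre><code class="lang-bash">aws ecr describe-images \
  --repository-name myapp \
  --query 'imageDetails[].imageTags[]' \
  --output text
# Expect both tags to show up, e.g.: latest  dev-cache
</code></pre>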
<h2 id="heading-troubleshooting-common-issues">Troubleshooting Common Issues</h2>
<h3 id="heading-issue-1-buildx-command-not-found">Issue 1: "buildx: command not found"</h3>
<p><strong>Solution</strong>: Update your CodeBuild image to a newer version. Use <code>aws/codebuild/standard:7.0</code> or later (or the ARM equivalent).</p>
<h3 id="heading-issue-2-cache-import-keeps-failing">Issue 2: Cache Import Keeps Failing</h3>
<p><strong>Solution</strong>: Check your IAM permissions. Your CodeBuild role needs:</p>
<ul>
<li><p><code>ecr:BatchGetImage</code></p>
</li>
<li><p><code>ecr:GetDownloadUrlForLayer</code></p>
</li>
<li><p><code>ecr:BatchCheckLayerAvailability</code></p>
</li>
<li><p><code>ecr:PutImage</code></p>
</li>
<li><p><code>ecr:InitiateLayerUpload</code></p>
</li>
<li><p><code>ecr:UploadLayerPart</code></p>
</li>
<li><p><code>ecr:CompleteLayerUpload</code></p>
</li>
</ul>
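<p>If you manage the role by hand, a minimal policy statement covering those actions might look like the sketch below—scope <code>Resource</code> to your actual repository ARN, and note that <code>ecr:GetAuthorizationToken</code> (needed for <code>docker login</code>) must be granted on <code>*</code>:</p>
<pre><code class="lang-json">{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ecr:GetAuthorizationToken",
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "ecr:BatchGetImage",
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchCheckLayerAvailability",
        "ecr:PutImage",
        "ecr:InitiateLayerUpload",
        "ecr:UploadLayerPart",
        "ecr:CompleteLayerUpload"
      ],
      "Resource": "arn:aws:ecr:us-east-1:123456789012:repository/myapp"
    }
  ]
}
</code></pre>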
<h3 id="heading-issue-3-build-hangs-at-exporting-cache">Issue 3: Build Hangs at "exporting cache"</h3>
<p><strong>Solution</strong>: Make sure <code>privilegedMode: true</code> is enabled in your CodeBuild environment settings. This is required for Docker-in-Docker operations.</p>
<h2 id="heading-advanced-multi-environment-setup">Advanced: Multi-Environment Setup</h2>
<p>If you have multiple environments (dev, staging, prod), use different cache tags for each:</p>
<pre><code class="lang-yaml"><span class="hljs-bullet">-</span> <span class="hljs-string">CACHE_TAG=${ENVIRONMENT}-cache</span>  <span class="hljs-comment"># Results in: dev-cache, staging-cache, prod-cache</span>
</code></pre>
<p>This way:</p>
<ul>
<li><p>Dev builds don't invalidate staging cache</p>
</li>
<li><p>Each environment maintains its own optimized cache</p>
</li>
<li><p>You can still share a base cache if needed</p>
</li>
</ul>
<h2 id="heading-cost-considerations">Cost Considerations</h2>
<p><strong>Storage Cost</strong>: You're now storing an additional cache image in ECR. At roughly the same size as your app image, this might add $0.10-0.50/month per repository depending on image size.</p>
<p><strong>Compute Savings</strong>: Faster builds = less compute time. If you're saving 1-2 minutes per build and doing 10 builds/day, that's roughly 300-600 fewer compute minutes (5-10 hours) per month. At ~$0.005/minute for <code>BUILD_GENERAL1_SMALL</code>, you could save $1.50-3/month.</p>
<p><strong>Net Result</strong>: Typically a small net savings, plus the huge developer experience win of faster feedback loops.</p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>By implementing ECR as a remote cache backend for your CodeBuild Docker builds, you get:</p>
<p>✅ <strong>Faster build times</strong><br />✅ <strong>Persistent, reliable caching</strong> across all builds<br />✅ <strong>Better layer reuse</strong> with intelligent cache management<br />✅ <strong>Minimal code changes</strong> (just updating your buildspec)<br />✅ <strong>Cost savings</strong> from reduced compute time</p>
<p>The implementation is straightforward, and the benefits are immediate (after the first build). Give it a try on your next project!</p>
<h2 id="heading-references">References</h2>
<ul>
<li><p><a target="_blank" href="https://aws.amazon.com/blogs/devops/reduce-docker-image-build-time-on-aws-codebuild-using-amazon-ecr-as-a-remote-cache/">AWS Blog: Reduce Docker image build time using ECR as a remote cache</a></p>
</li>
<li><p><a target="_blank" href="https://docs.docker.com/build/buildx/">Docker Buildx Documentation</a></p>
</li>
<li><p><a target="_blank" href="https://docs.docker.com/build/cache/backends/">Docker Cache Backends Documentation</a></p>
</li>
</ul>
<hr />
<p><strong>Got questions or run into issues?</strong> Drop a comment below - I'd love to hear about your experience implementing this!</p>
<hr />
]]></content:encoded></item><item><title><![CDATA[Deploying Next.js SSR on Cloudflare: The Complete Guide to OpenNext vs next-on-pages]]></title><description><![CDATA[After you complete this article, you will have a solid understanding of:

Why deploying Next.js SSR on Cloudflare is different from traditional hosting

The real difference between Cloudflare Pages with next-on-pages and OpenNext adapter

How the fre...]]></description><link>https://blog.yusadolat.me/deploying-nextjs-ssr-on-cloudflare-the-complete-guide-to-opennext-vs-next-on-pages</link><guid isPermaLink="true">https://blog.yusadolat.me/deploying-nextjs-ssr-on-cloudflare-the-complete-guide-to-opennext-vs-next-on-pages</guid><category><![CDATA[Next.js]]></category><category><![CDATA[cloudflare]]></category><category><![CDATA[edgecomputing]]></category><category><![CDATA[SSR]]></category><category><![CDATA[serverless]]></category><dc:creator><![CDATA[Yusuf Adeyemo]]></dc:creator><pubDate>Sat, 02 Aug 2025 10:55:04 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1754131361661/1b1f07e1-87db-48ce-b3bc-6a09b75b648e.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<hr />
<p>After you complete this article, you will have a solid understanding of:</p>
<ul>
<li><p>Why deploying Next.js SSR on Cloudflare is different from traditional hosting</p>
</li>
<li><p>The real difference between Cloudflare Pages with next-on-pages and OpenNext adapter</p>
</li>
<li><p>How the free tier's 100,000 daily requests can handle production traffic</p>
</li>
<li><p>Common deployment pitfalls that will waste hours of debugging</p>
</li>
<li><p>Which adapter to choose for your specific use case</p>
</li>
</ul>
<h2 id="heading-have-you-ever-seen-this-error-when-deploying-to-cloudflare">Have You Ever Seen This Error When Deploying to Cloudflare?</h2>
<p>If you've tried deploying a Next.js app to Cloudflare Pages, you've probably encountered this frustrating message:</p>
<pre><code class="lang-plaintext">Error: Dynamic server usage: Page couldn't be rendered statically because it used `cookies`.
</code></pre>
<p>Or even worse:</p>
<pre><code class="lang-plaintext">Error: The edge runtime does not support Node.js 'fs' module.
You can use 'fs' module only in Node.js runtime.
</code></pre>
<p>And then you wonder: "But I thought Cloudflare supports Next.js SSR now?"</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1753633175139/b8514323-216a-4e88-a9d1-5aab5f5dd3c1.png" alt="A confused developer looking at two doors - one labeled &quot;Edge Runtime&quot; with a ⚠️ warning sign, another labeled &quot;Node.js Runtime&quot; with a ✅ check mark" class="image--center mx-auto" /></p>
<p>Let me clear up this confusion once and for all.</p>
<h2 id="heading-the-two-ways-to-deploy-nextjs-on-cloudflare">The Two Ways to Deploy Next.js on Cloudflare</h2>
<p>There are two completely different approaches to deploying Next.js on Cloudflare, and choosing the wrong one will cause endless headaches.</p>
<h3 id="heading-option-1-cloudflarenext-on-pages-the-limited-one">Option 1: @cloudflare/next-on-pages (The Limited One)</h3>
<p>This was the original way, and it comes with a massive limitation:</p>
<pre><code class="lang-typescript"><span class="hljs-comment">// Every server component needs this 👇</span>
<span class="hljs-keyword">export</span> <span class="hljs-keyword">const</span> runtime = <span class="hljs-string">'edge'</span>;

<span class="hljs-keyword">export</span> <span class="hljs-keyword">default</span> <span class="hljs-keyword">async</span> <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">Page</span>(<span class="hljs-params"></span>) </span>{
  <span class="hljs-comment">// ❌ This will fail!</span>
  <span class="hljs-keyword">const</span> fs = <span class="hljs-built_in">require</span>(<span class="hljs-string">'fs'</span>);

  <span class="hljs-comment">// ❌ This will also fail!</span>
  <span class="hljs-keyword">const</span> bcrypt = <span class="hljs-built_in">require</span>(<span class="hljs-string">'bcrypt'</span>);

  <span class="hljs-comment">// ✅ Only Web APIs work</span>
  <span class="hljs-keyword">const</span> response = <span class="hljs-keyword">await</span> fetch(<span class="hljs-string">'https://api.example.com'</span>);

  <span class="hljs-keyword">return</span> &lt;div&gt;Limited to Edge Runtime&lt;/div&gt;;
}
</code></pre>
<h3 id="heading-option-2-opennextjscloudflare-the-game-changer">Option 2: @opennextjs/cloudflare (The Game Changer)</h3>
<p>Released in 2024 and now in v1.0-beta, this adapter supports the Node.js runtime:</p>
<pre><code class="lang-typescript"><span class="hljs-comment">// No runtime declaration needed! 🎉</span>

<span class="hljs-keyword">export</span> <span class="hljs-keyword">default</span> <span class="hljs-keyword">async</span> <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">Page</span>(<span class="hljs-params"></span>) </span>{
  <span class="hljs-comment">// ✅ This works now!</span>
  <span class="hljs-keyword">const</span> fs = <span class="hljs-built_in">require</span>(<span class="hljs-string">'fs'</span>);

  <span class="hljs-comment">// ✅ This works too!</span>
  <span class="hljs-keyword">const</span> crypto = <span class="hljs-built_in">require</span>(<span class="hljs-string">'crypto'</span>);

  <span class="hljs-comment">// ✅ Even database connections work</span>
  <span class="hljs-keyword">const</span> data = <span class="hljs-keyword">await</span> prisma.user.findMany();

  <span class="hljs-keyword">return</span> &lt;div&gt;Full Node.js support!&lt;/div&gt;;
}
</code></pre>
<h2 id="heading-setting-up-nextjs-ssr-with-opennext-the-right-way">Setting Up Next.js SSR with OpenNext (The Right Way)</h2>
<p>Let's deploy a real Next.js app with full SSR support on Cloudflare.</p>
<h3 id="heading-option-1-using-cloudflare-cli-recommended"><strong>Option 1: Using Cloudflare CLI (Recommended)</strong></h3>
<pre><code class="lang-bash"><span class="hljs-comment"># Create new Cloudflare project</span>
npm create cloudflare@latest my-nextjs-app -- \
  --framework=next --platform=workers

<span class="hljs-comment"># Deploy</span>
npm run deploy
</code></pre>
<p>Ready to see this in action? Check out the complete working example with full source code, deployment configuration, and step-by-step setup instructions <a target="_blank" href="https://github.com/Yusadolat/my-nextjs-app"><strong>here</strong></a></p>
<hr />
<h2 id="heading-migrating-your-existing-nextjs-app-to-cloudflare">Migrating Your Existing Next.js App to Cloudflare</h2>
<p>If you already have a Next.js app running on Vercel, AWS, or anywhere else, here's how to migrate it to Cloudflare.</p>
<h3 id="heading-step-1-check-your-current-setup">Step 1: Check Your Current Setup</h3>
<p>First, identify which features your app uses:</p>
<pre><code class="lang-bash"><span class="hljs-comment"># Check your Next.js version</span>
npm list next

<span class="hljs-comment"># Look for these in your code:</span>
grep -r <span class="hljs-string">"export const runtime"</span> . <span class="hljs-comment"># Edge runtime declarations</span>
grep -r <span class="hljs-string">"getServerSideProps"</span> .   <span class="hljs-comment"># SSR pages</span>
grep -r <span class="hljs-string">"getStaticProps"</span> .        <span class="hljs-comment"># SSG pages</span>
grep -r <span class="hljs-string">"app/api"</span> .               <span class="hljs-comment"># API routes</span>
</code></pre>
<h3 id="heading-step-2-choose-your-migration-path">Step 2: Choose Your Migration Path</h3>
<p><strong>If your app uses Vercel:</strong></p>
<pre><code class="lang-bash"><span class="hljs-comment"># Use Diverce for automatic migration!</span>
npx diverce migrate

<span class="hljs-comment"># This tool automatically:</span>
<span class="hljs-comment"># - Adds OpenNext to your project</span>
<span class="hljs-comment"># - Updates your configuration</span>
<span class="hljs-comment"># - Creates a PR with all changes</span>
</code></pre>
<p><strong>For manual migration:</strong></p>
<pre><code class="lang-bash"><span class="hljs-comment"># Install the OpenNext adapter</span>
npm install -D @opennextjs/cloudflare wrangler

<span class="hljs-comment"># Remove any Vercel-specific packages</span>
npm uninstall @vercel/analytics @vercel/og
</code></pre>
<h3 id="heading-step-3-update-your-configuration">Step 3: Update Your Configuration</h3>
<p><strong>Remove edge runtime declarations:</strong></p>
<pre><code class="lang-typescript"><span class="hljs-comment">// ❌ Remove these from all your files</span>
<span class="hljs-keyword">export</span> <span class="hljs-keyword">const</span> runtime = <span class="hljs-string">'edge'</span>;
<span class="hljs-keyword">export</span> <span class="hljs-keyword">const</span> dynamic = <span class="hljs-string">'force-dynamic'</span>;

<span class="hljs-comment">// ✅ OpenNext handles this automatically</span>
<span class="hljs-comment">// Just delete these lines!</span>
</code></pre>
<p><strong>Update your next.config.js:</strong></p>
<pre><code class="lang-javascript"><span class="hljs-comment">// next.config.js</span>
<span class="hljs-comment">// ❌ Remove this next-on-pages import if you have it</span>
<span class="hljs-keyword">const</span> { setupDevPlatform } = <span class="hljs-built_in">require</span>(<span class="hljs-string">'@cloudflare/next-on-pages/next-dev'</span>);

<span class="hljs-comment">// ❌ Remove this if you have it</span>
<span class="hljs-keyword">if</span> (process.env.NODE_ENV === <span class="hljs-string">'development'</span>) {
  <span class="hljs-keyword">await</span> setupDevPlatform();
}

<span class="hljs-comment">// ✅ Add this instead</span>
<span class="hljs-keyword">import</span> { initOpenNextCloudflareForDev } <span class="hljs-keyword">from</span> <span class="hljs-string">"@opennextjs/cloudflare"</span>;
initOpenNextCloudflareForDev();

<span class="hljs-keyword">const</span> nextConfig = {
  <span class="hljs-comment">// Your existing config stays the same!</span>
  <span class="hljs-attr">images</span>: {
    <span class="hljs-attr">domains</span>: [<span class="hljs-string">'example.com'</span>],
  },
  <span class="hljs-comment">// Remove any Vercel-specific settings</span>
  <span class="hljs-comment">// outputFileTracing: false, ❌</span>
};

<span class="hljs-built_in">module</span>.exports = nextConfig;
</code></pre>
<h3 id="heading-step-4-handle-platform-specific-code">Step 4: Handle Platform-Specific Code</h3>
<p><strong>If you're using Vercel KV:</strong></p>
<pre><code class="lang-typescript"><span class="hljs-comment">// ❌ Before (Vercel KV)</span>
<span class="hljs-keyword">import</span> { kv } <span class="hljs-keyword">from</span> <span class="hljs-string">'@vercel/kv'</span>;
<span class="hljs-keyword">await</span> kv.set(<span class="hljs-string">'key'</span>, <span class="hljs-string">'value'</span>);

<span class="hljs-comment">// ✅ After (Cloudflare KV, via the OpenNext context helper)</span>
<span class="hljs-keyword">import</span> { getCloudflareContext } <span class="hljs-keyword">from</span> <span class="hljs-string">'@opennextjs/cloudflare'</span>;

<span class="hljs-keyword">export</span> <span class="hljs-keyword">async</span> <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">GET</span>(<span class="hljs-params">request: Request</span>) </span>{
  <span class="hljs-keyword">const</span> { env } = getCloudflareContext();  <span class="hljs-comment">// bindings declared in wrangler.toml</span>
  <span class="hljs-keyword">await</span> env.MY_KV.put(<span class="hljs-string">'key'</span>, <span class="hljs-string">'value'</span>);
  <span class="hljs-keyword">return</span> <span class="hljs-keyword">new</span> Response(<span class="hljs-string">'Saved!'</span>);
}
</code></pre>
<p><strong>If you're using Vercel Postgres:</strong></p>
<pre><code class="lang-typescript"><span class="hljs-comment">// ❌ Before (Vercel Postgres)</span>
<span class="hljs-keyword">import</span> { sql } <span class="hljs-keyword">from</span> <span class="hljs-string">'@vercel/postgres'</span>;
<span class="hljs-keyword">const</span> { rows } = <span class="hljs-keyword">await</span> sql<span class="hljs-string">`SELECT * FROM users`</span>;

<span class="hljs-comment">// ✅ After (Any PostgreSQL client works!)</span>
<span class="hljs-keyword">import</span> { Pool } <span class="hljs-keyword">from</span> <span class="hljs-string">'pg'</span>;
<span class="hljs-keyword">const</span> pool = <span class="hljs-keyword">new</span> Pool({
  connectionString: process.env.DATABASE_URL,
});
<span class="hljs-keyword">const</span> { rows } = <span class="hljs-keyword">await</span> pool.query(<span class="hljs-string">'SELECT * FROM users'</span>);
</code></pre>
<p><strong>If you're using Vercel Edge Config:</strong></p>
<pre><code class="lang-typescript"><span class="hljs-comment">// ❌ Before (Vercel Edge Config)</span>
<span class="hljs-keyword">import</span> { get } <span class="hljs-keyword">from</span> <span class="hljs-string">'@vercel/edge-config'</span>;
<span class="hljs-keyword">const</span> value = <span class="hljs-keyword">await</span> get(<span class="hljs-string">'featureFlag'</span>);

<span class="hljs-comment">// ✅ After (Cloudflare KV or D1, via the OpenNext context helper)</span>
<span class="hljs-keyword">import</span> { getCloudflareContext } <span class="hljs-keyword">from</span> <span class="hljs-string">'@opennextjs/cloudflare'</span>;

<span class="hljs-keyword">export</span> <span class="hljs-keyword">async</span> <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">GET</span>(<span class="hljs-params">request: Request</span>) </span>{
  <span class="hljs-keyword">const</span> { env } = getCloudflareContext();
  <span class="hljs-keyword">const</span> value = <span class="hljs-keyword">await</span> env.CONFIG_KV.get(<span class="hljs-string">'featureFlag'</span>);
  <span class="hljs-keyword">return</span> Response.json({ value });
}
</code></pre>
<h3 id="heading-step-5-update-environment-variables">Step 5: Update Environment Variables</h3>
<p><strong>Create a</strong> <code>.dev.vars</code> file for local development:</p>
<pre><code class="lang-bash"><span class="hljs-comment"># .dev.vars (like .env.local but for Cloudflare)</span>
DATABASE_URL=postgresql://user:pass@localhost:5432/mydb
NEXTAUTH_SECRET=your-secret-key
NEXT_PUBLIC_API_URL=https://api.example.com
</code></pre>
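<p><code>.dev.vars</code> only covers local development. For the deployed Worker, set secrets with Wrangler instead (the variable name is reused from the example above):</p>
<pre><code class="lang-bash">npx wrangler secret put NEXTAUTH_SECRET
# Wrangler prompts for the value and stores it encrypted on the Worker
</code></pre>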
<p><strong>Add bindings to wrangler.toml:</strong><br /><strong>Optional:</strong> before that, create the R2 bucket for the cache if you enabled the R2 incremental cache in your <code>open-next.config.ts</code>.</p>
<p>Run this command to create it:</p>
<p><code>npx wrangler r2 bucket create your-bucket-name</code></p>
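<p>For reference, a minimal <code>open-next.config.ts</code> enabling that R2 cache might look like this—the import path follows the OpenNext docs at the time of writing, so double-check it against the current documentation:</p>
<pre><code class="lang-typescript">// open-next.config.ts
import { defineCloudflareConfig } from "@opennextjs/cloudflare";
import r2IncrementalCache from "@opennextjs/cloudflare/overrides/incremental-cache/r2-incremental-cache";

export default defineCloudflareConfig({
  incrementalCache: r2IncrementalCache, // backed by the NEXT_INC_CACHE_R2_BUCKET binding below
});
</code></pre>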
<pre><code class="lang-toml"><span class="hljs-attr">name</span> = <span class="hljs-string">"my-nextjs-app"</span>
<span class="hljs-attr">main</span> = <span class="hljs-string">".open-next/worker.js"</span>
<span class="hljs-attr">compatibility_date</span> = <span class="hljs-string">"2025-07-27"</span>
<span class="hljs-attr">compatibility_flags</span> = [<span class="hljs-string">"nodejs_compat"</span>]


<span class="hljs-comment"># Static assets configuration</span>
<span class="hljs-section">[assets]</span>
<span class="hljs-attr">directory</span> = <span class="hljs-string">".open-next/assets"</span>
<span class="hljs-attr">binding</span> = <span class="hljs-string">"ASSETS"</span>


<span class="hljs-comment"># R2 Buckets</span>
<span class="hljs-section">[[r2_buckets]]</span>
<span class="hljs-attr">binding</span> = <span class="hljs-string">"NEXT_INC_CACHE_R2_BUCKET"</span>
<span class="hljs-attr">bucket_name</span> = <span class="hljs-string">"my-bucket-name"</span>
</code></pre>
<h3 id="heading-now-let-deploy">Step 6: Now, Let's Deploy</h3>
<pre><code class="lang-bash"># Run the command below
npm run deploy
</code></pre>
<p><strong>If everything works well</strong>, your Next.js app will be live on Cloudflare's global network within seconds, accessible via a <code>*.</code><a target="_blank" href="http://workers.dev"><code>workers.dev</code></a> URL or your custom domain, serving from 280+ locations worldwide with automatic SSL.</p>
<h3 id="heading-common-migration-issues">Common Migration Issues</h3>
<p><strong>Issue 1: Dynamic imports failing</strong></p>
<pre><code class="lang-typescript"><span class="hljs-comment">// ❌ This might fail</span>
<span class="hljs-keyword">const</span> MyComponent = dynamic(<span class="hljs-function">() =&gt;</span> <span class="hljs-keyword">import</span>(<span class="hljs-string">'./MyComponent'</span>), {
  ssr: <span class="hljs-literal">false</span>
});

<span class="hljs-comment">// ✅ Ensure proper configuration</span>
<span class="hljs-keyword">const</span> MyComponent = dynamic(
  <span class="hljs-function">() =&gt;</span> <span class="hljs-keyword">import</span>(<span class="hljs-string">'./MyComponent'</span>),
  { 
    ssr: <span class="hljs-literal">false</span>,
    loading: <span class="hljs-function">() =&gt;</span> &lt;div&gt;Loading...&lt;/div&gt;
  }
);
</code></pre>
<p><strong>Issue 2: File system access</strong></p>
<pre><code class="lang-typescript"><span class="hljs-comment">// ❌ This won't work in production</span>
<span class="hljs-keyword">import</span> fs <span class="hljs-keyword">from</span> <span class="hljs-string">'fs'</span>;
<span class="hljs-keyword">const</span> data = fs.readFileSync(<span class="hljs-string">'./data.json'</span>);

<span class="hljs-comment">// ✅ Use static imports or fetch</span>
<span class="hljs-keyword">import</span> data <span class="hljs-keyword">from</span> <span class="hljs-string">'./data.json'</span>;
<span class="hljs-comment">// OR</span>
<span class="hljs-keyword">const</span> response = <span class="hljs-keyword">await</span> fetch(<span class="hljs-string">'/data.json'</span>);
<span class="hljs-keyword">const</span> data = <span class="hljs-keyword">await</span> response.json();
</code></pre>
<p><strong>Issue 3: Image optimization</strong></p>
<pre><code class="lang-typescript"><span class="hljs-comment">// ✅ Next/Image works but needs configuration</span>
<span class="hljs-comment">// In next.config.js</span>
<span class="hljs-built_in">module</span>.<span class="hljs-built_in">exports</span> = {
  images: {
    loader: <span class="hljs-string">'custom'</span>,
    loaderFile: <span class="hljs-string">'./image-loader.js'</span>,
  },
};

<span class="hljs-comment">// image-loader.js</span>
<span class="hljs-keyword">export</span> <span class="hljs-keyword">default</span> <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">cloudflareLoader</span>(<span class="hljs-params">{ src, width, quality }</span>) </span>{
  <span class="hljs-keyword">const</span> params = [<span class="hljs-string">`width=<span class="hljs-subst">${width}</span>`</span>];
  <span class="hljs-keyword">if</span> (quality) {
    params.push(<span class="hljs-string">`quality=<span class="hljs-subst">${quality}</span>`</span>);
  }
  <span class="hljs-keyword">const</span> paramsString = params.join(<span class="hljs-string">','</span>);
  <span class="hljs-keyword">return</span> <span class="hljs-string">`/cdn-cgi/image/<span class="hljs-subst">${paramsString}</span>/<span class="hljs-subst">${src}</span>`</span>;
}
</code></pre>
<h3 id="heading-step-7-test-your-migration">Step 7: Test Your Migration</h3>
<pre><code class="lang-bash"><span class="hljs-comment"># 1. Build and preview locally</span>
npm run preview

<span class="hljs-comment"># 2. Check for errors</span>
<span class="hljs-comment"># Common errors and fixes:</span>

<span class="hljs-comment"># "Cannot find module 'fs'"</span>
<span class="hljs-comment"># → Remove file system operations</span>

<span class="hljs-comment"># "window is not defined"</span>
<span class="hljs-comment"># → Wrap in useEffect or check typeof window</span>

<span class="hljs-comment"># "Module not found: Can't resolve 'encoding'"</span>
<span class="hljs-comment"># → Add to externals in next.config.js</span>

<span class="hljs-comment"># 3. Test all your routes</span>
curl http://localhost:8787/api/health
curl http://localhost:8787/dashboard
</code></pre>
<h2 id="heading-the-free-tier-more-powerful-than-you-think">The Free Tier: More Powerful Than You Think</h2>
<p>Cloudflare's free tier includes:</p>
<ul>
<li><p><strong>100,000 requests per day</strong> (resets at midnight UTC)</p>
</li>
<li><p><strong>10ms CPU time per request</strong> (plenty for SSR)</p>
</li>
<li><p><strong>128MB memory per Worker</strong></p>
</li>
<li><p><strong>Unlimited static asset requests</strong> 🎉</p>
</li>
</ul>
<h3 id="heading-real-world-capacity-example">Real-World Capacity Example</h3>
<pre><code class="lang-typescript"><span class="hljs-comment">// Let's calculate what 100k requests means:</span>

<span class="hljs-comment">// Average SSR page: ~50ms total time (including I/O)</span>
<span class="hljs-comment">// CPU time used: ~5-10ms</span>
<span class="hljs-comment">// Memory used: ~30-50MB</span>

<span class="hljs-comment">// Daily capacity on free tier:</span>
<span class="hljs-comment">// - 100,000 page views</span>
<span class="hljs-comment">// - ~4,166 page views per hour</span>
<span class="hljs-comment">// - ~69 page views per minute</span>

<span class="hljs-comment">// That's enough for:</span>
<span class="hljs-comment">// - A blog with 50k daily visitors (2 pages each)</span>
<span class="hljs-comment">// - A SaaS dashboard with 10k daily active users</span>
<span class="hljs-comment">// - An e-commerce site with 20k daily shoppers</span>
</code></pre>
<h2 id="heading-advanced-features-that-actually-work">Advanced Features That Actually Work</h2>
<h3 id="heading-1-api-routes-with-full-nodejs">1. API Routes with Full Node.js</h3>
<pre><code class="lang-typescript"><span class="hljs-comment">// app/api/process/route.ts</span>
<span class="hljs-keyword">import</span> { createHash } <span class="hljs-keyword">from</span> <span class="hljs-string">'crypto'</span>;
<span class="hljs-keyword">import</span> { headers } <span class="hljs-keyword">from</span> <span class="hljs-string">'next/headers'</span>;

<span class="hljs-keyword">export</span> <span class="hljs-keyword">async</span> <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">POST</span>(<span class="hljs-params">request: Request</span>) </span>{
  <span class="hljs-keyword">const</span> body = <span class="hljs-keyword">await</span> request.json();

  <span class="hljs-comment">// ✅ Node.js crypto works!</span>
  <span class="hljs-keyword">const</span> hash = createHash(<span class="hljs-string">'sha256'</span>)
    .update(body.data)
    .digest(<span class="hljs-string">'hex'</span>);

  <span class="hljs-comment">// ✅ Headers manipulation</span>
  <span class="hljs-keyword">const</span> headersList = headers();
  <span class="hljs-keyword">const</span> userAgent = headersList.get(<span class="hljs-string">'user-agent'</span>);

  <span class="hljs-comment">// ✅ Complex processing</span>
  <span class="hljs-keyword">const</span> result = <span class="hljs-keyword">await</span> processDataWithNodeAPIs(body);

  <span class="hljs-keyword">return</span> Response.json({ 
    hash, 
    processed: result,
    userAgent 
  });
}
</code></pre>
<h3 id="heading-2-middleware-that-scales">2. Middleware That Scales</h3>
<pre><code class="lang-typescript"><span class="hljs-comment">// middleware.ts</span>
<span class="hljs-keyword">import</span> { NextResponse } <span class="hljs-keyword">from</span> <span class="hljs-string">'next/server'</span>;
<span class="hljs-keyword">import</span> <span class="hljs-keyword">type</span> { NextRequest } <span class="hljs-keyword">from</span> <span class="hljs-string">'next/server'</span>;

<span class="hljs-keyword">export</span> <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">middleware</span>(<span class="hljs-params">request: NextRequest</span>) </span>{
  <span class="hljs-comment">// Runs at the edge for EVERY request</span>
  <span class="hljs-keyword">const</span> country = request.geo?.country || <span class="hljs-string">'US'</span>;

  <span class="hljs-comment">// Add custom headers</span>
  <span class="hljs-keyword">const</span> response = NextResponse.next();
  response.headers.set(<span class="hljs-string">'x-user-country'</span>, country);

  <span class="hljs-comment">// Redirect based on geo</span>
  <span class="hljs-keyword">if</span> (country === <span class="hljs-string">'CN'</span> &amp;&amp; request.nextUrl.pathname === <span class="hljs-string">'/'</span>) {
    <span class="hljs-keyword">return</span> NextResponse.redirect(<span class="hljs-keyword">new</span> URL(<span class="hljs-string">'/cn'</span>, request.url));
  }

  <span class="hljs-keyword">return</span> response;
}

<span class="hljs-keyword">export</span> <span class="hljs-keyword">const</span> config = {
  matcher: <span class="hljs-string">'/((?!api|_next/static|_next/image|favicon.ico).*)'</span>,
};
</code></pre>
<h3 id="heading-3-isr-that-actually-works">3. ISR That Actually Works</h3>
<pre><code class="lang-typescript"><span class="hljs-comment">// app/blog/[slug]/page.tsx</span>
<span class="hljs-keyword">export</span> <span class="hljs-keyword">async</span> <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">generateStaticParams</span>(<span class="hljs-params"></span>) </span>{
  <span class="hljs-comment">// Pre-build these pages</span>
  <span class="hljs-keyword">return</span> [
    { slug: <span class="hljs-string">'getting-started'</span> },
    { slug: <span class="hljs-string">'advanced-features'</span> }
  ];
}

<span class="hljs-keyword">export</span> <span class="hljs-keyword">default</span> <span class="hljs-keyword">async</span> <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">BlogPost</span>(<span class="hljs-params">{ params }: { params: { slug: <span class="hljs-built_in">string</span> } }</span>) </span>{
  <span class="hljs-keyword">const</span> post = <span class="hljs-keyword">await</span> fetch(<span class="hljs-string">`https://api.blog.com/posts/<span class="hljs-subst">${params.slug}</span>`</span>, {
    next: { revalidate: <span class="hljs-number">3600</span> } <span class="hljs-comment">// Revalidate every hour</span>
  });

  <span class="hljs-keyword">return</span> &lt;article&gt;{<span class="hljs-comment">/* Your content */</span>}&lt;/article&gt;;
}
</code></pre>
<h2 id="heading-production-deployment-checklist">Production Deployment Checklist</h2>
<p>Before deploying to production, ensure:</p>
<pre><code class="lang-bash">✅ nodejs_compat flag is <span class="hljs-built_in">set</span>
✅ Environment variables are configured <span class="hljs-keyword">in</span> Cloudflare dashboard
✅ R2 bucket is created <span class="hljs-keyword">for</span> caching (optional)
✅ Custom domain is configured
✅ Preview deployments are working
</code></pre>
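<p>The <code>nodejs_compat</code> flag lives in your <code>wrangler.toml</code>. A minimal sketch (the name and date are placeholders for your own project):</p>
<pre><code class="lang-plaintext"># wrangler.toml (excerpt)
name = "my-next-app"                     # placeholder worker name
compatibility_date = "2024-09-23"        # example date; use your project's own
compatibility_flags = ["nodejs_compat"]  # unlocks Node.js APIs like crypto
</code></pre>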
<h3 id="heading-deploy-command">Deploy Command</h3>
<pre><code class="lang-bash"><span class="hljs-comment"># Deploy to production</span>
npm run deploy -- --env production

<span class="hljs-comment"># Deploy preview</span>
npm run deploy -- --env preview
</code></pre>
<h2 id="heading-when-to-use-which-approach">When to Use Which Approach?</h2>
<h3 id="heading-use-cloudflarenext-on-pages-when">Use @cloudflare/next-on-pages when:</h3>
<ul>
<li><p>Your app only uses Web APIs</p>
</li>
<li><p>You need the absolute fastest cold starts</p>
</li>
<li><p>You're building a simple marketing site</p>
</li>
</ul>
<h3 id="heading-use-opennextjscloudflare-when">Use @opennextjs/cloudflare when:</h3>
<ul>
<li><p>You need Node.js APIs (crypto, fs, etc.)</p>
</li>
<li><p>You're using Prisma or other Node.js ORMs</p>
</li>
<li><p>You have existing Next.js apps to migrate</p>
</li>
<li><p>You want the full Next.js feature set</p>
</li>
</ul>
<h2 id="heading-the-future-is-here">The Future is Here</h2>
<p>With the OpenNext adapter reaching v1.0, deploying production Next.js apps on Cloudflare is finally practical. You get:</p>
<ul>
<li><p><strong>True SSR</strong> with full Node.js support</p>
</li>
<li><p><strong>Global edge deployment</strong> from 280+ locations</p>
</li>
<li><p><strong>Generous free tier</strong> for getting started</p>
</li>
<li><p><strong>Seamless scaling</strong> when you grow</p>
</li>
</ul>
<p>Remember: You're not choosing between features and performance anymore. You can have both.</p>
<h2 id="heading-was-this-article-helpful-for-you-if-so-let-me-know-what-you-think-in-the-comment-section">Was this article helpful for you? If so, let me know what you think in the comment section.</h2>
]]></content:encoded></item><item><title><![CDATA[Do You Really Know the Difference Between L1, L2, and L3 CDK Constructs?]]></title><description><![CDATA[After you complete this article, you will have a solid understanding of:

What L1, L2, and L3 constructs actually are and when to use each

Why AWS created three different abstraction levels (and the hidden benefits)

How to avoid the most common CDK...]]></description><link>https://blog.yusadolat.me/do-you-really-know-the-difference-between-l1-l2-and-l3-cdk-constructs</link><guid isPermaLink="true">https://blog.yusadolat.me/do-you-really-know-the-difference-between-l1-l2-and-l3-cdk-constructs</guid><category><![CDATA[aws-cdk]]></category><category><![CDATA[#IaC]]></category><category><![CDATA[TypeScript]]></category><category><![CDATA[Cloud Computing]]></category><category><![CDATA[Devops]]></category><dc:creator><![CDATA[Yusuf Adeyemo]]></dc:creator><pubDate>Sat, 26 Jul 2025 08:23:36 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1753383075914/53c111d8-3855-451d-980b-84067570681e.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>After you complete this article, you will have a solid understanding of:</p>
<ul>
<li><p>What L1, L2, and L3 constructs actually are and when to use each</p>
</li>
<li><p>Why AWS created three different abstraction levels (and the hidden benefits)</p>
</li>
<li><p>How to avoid the most common CDK construct mistakes</p>
</li>
<li><p>When to break the rules and mix construct levels</p>
</li>
</ul>
<h2 id="heading-have-you-ever-been-confused-by-cdk-construct-levels">Have You Ever Been Confused by CDK Construct Levels?</h2>
<p>If you've ever started learning AWS CDK, you've probably encountered code like this and wondered why there are so many ways to create the same S3 bucket:</p>
<pre><code class="lang-typescript"><span class="hljs-comment">// Wait, what? Three different ways to create a bucket? </span>
<span class="hljs-keyword">import</span> { CfnBucket } <span class="hljs-keyword">from</span> <span class="hljs-string">'aws-cdk-lib/aws-s3'</span>; 
<span class="hljs-keyword">import</span> { Bucket } <span class="hljs-keyword">from</span> <span class="hljs-string">'aws-cdk-lib/aws-s3'</span>; 
<span class="hljs-keyword">import</span> { StaticWebsite } <span class="hljs-keyword">from</span> <span class="hljs-string">'@aws-solutions-constructs/aws-s3-cloudfront'</span>;

<span class="hljs-comment">// Which one should I use? 🤔</span>
</code></pre>
<p>And then you see this error that makes you question everything:</p>
<pre><code class="lang-typescript"><span class="hljs-built_in">Error</span>: Cannot use property <span class="hljs-keyword">type</span> <span class="hljs-string">'BucketProps'</span> <span class="hljs-keyword">with</span> L1 construct <span class="hljs-string">'CfnBucket'</span>
</code></pre>
<p>"But they're both S3 buckets! Why can't I use the same properties?"</p>
<p>Let me help you understand these construct levels once and for all.  </p>
<h2 id="heading-what-are-cdk-constructs-anyway">What Are CDK Constructs Anyway?</h2>
<p>Think of CDK constructs as LEGO blocks for your cloud infrastructure. Just like LEGO has basic bricks, specialized pieces, and complete sets, CDK has three levels of constructs.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1753382540736/15f4ff96-0d29-4aa8-a3a9-33841eee1e98.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-level-1-l1-constructs-the-raw-cloudformation-experience">Level 1 (L1) Constructs: The Raw CloudFormation Experience</h2>
<p>L1 constructs are the most basic building blocks. They start with <code>Cfn</code> (short for CloudFormation) and map directly to CloudFormation resources. No magic, no shortcuts.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">import</span> { CfnBucket } <span class="hljs-keyword">from</span> <span class="hljs-string">'aws-cdk-lib/aws-s3'</span>;

<span class="hljs-keyword">const</span> bucket = <span class="hljs-keyword">new</span> CfnBucket(<span class="hljs-built_in">this</span>, <span class="hljs-string">'MyL1Bucket'</span>, {
  bucketName: <span class="hljs-string">'my-raw-bucket-2025'</span>,
  versioningConfiguration: {
    status: <span class="hljs-string">'Enabled'</span>
  },
  publicAccessBlockConfiguration: {
    blockPublicAcls: <span class="hljs-literal">true</span>,
    blockPublicPolicy: <span class="hljs-literal">true</span>,
    ignorePublicAcls: <span class="hljs-literal">true</span>,
    restrictPublicBuckets: <span class="hljs-literal">true</span>
  }
});
</code></pre>
<p>Notice how verbose this is? You have to configure EVERYTHING manually. It's like writing CloudFormation in TypeScript.</p>
<h3 id="heading-when-would-you-ever-use-l1-constructs">When Would You Ever Use L1 Constructs?</h3>
<ol>
<li><p><strong>Brand New AWS Services</strong> - When AWS releases a new service, L1 support comes first</p>
</li>
<li><p><strong>Debugging L2/L3 Issues</strong> - Sometimes you need to see what's really happening</p>
</li>
<li><p><strong>Migrating from CloudFormation</strong> - Direct 1:1 mapping makes migration easier</p>
</li>
<li><p><strong>Edge Cases</strong> - When you need a specific CloudFormation property not exposed in L2</p>
</li>
</ol>
<h2 id="heading-level-2-l2-constructs-the-sweet-spot">Level 2 (L2) Constructs: The Sweet Spot</h2>
<p>L2 constructs are what most developers use daily. They provide sensible defaults, helper methods, and hide complexity while still giving you control.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">import</span> { Bucket, BucketEncryption } <span class="hljs-keyword">from</span> <span class="hljs-string">'aws-cdk-lib/aws-s3'</span>;

<span class="hljs-keyword">const</span> bucket = <span class="hljs-keyword">new</span> Bucket(<span class="hljs-built_in">this</span>, <span class="hljs-string">'MyL2Bucket'</span>, {
  bucketName: <span class="hljs-string">'my-friendly-bucket-2025'</span>,
  versioned: <span class="hljs-literal">true</span>,
  encryption: BucketEncryption.S3_MANAGED,
  removalPolicy: RemovalPolicy.DESTROY <span class="hljs-comment">// Much cleaner!</span>
});

<span class="hljs-comment">// Look at these helper methods! </span>
bucket.grantRead(myLambdaFunction);
bucket.addLifecycleRule({
  expiration: Duration.days(<span class="hljs-number">90</span>)
});
</code></pre>
<p>See the difference? L2 constructs:</p>
<ul>
<li><p>Use friendly property names (<code>versioned</code> vs <code>versioningConfiguration</code>)</p>
</li>
<li><p>Provide helper methods (<code>grantRead()</code>)</p>
</li>
<li><p>Set security best practices by default</p>
</li>
<li><p>Handle resource dependencies automatically</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1753375905963/4b96438b-bc18-44cc-a503-583a594b90b4.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-level-3-l3-constructs-complete-solutions">Level 3 (L3) Constructs: Complete Solutions</h2>
<p>L3 constructs (also called patterns) are pre-built architectures for common use cases. They combine multiple resources into a working solution.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">import</span> { StaticWebsite } <span class="hljs-keyword">from</span> <span class="hljs-string">'@aws-solutions-constructs/aws-s3-cloudfront'</span>;

<span class="hljs-keyword">const</span> website = <span class="hljs-keyword">new</span> StaticWebsite(<span class="hljs-built_in">this</span>, <span class="hljs-string">'MyWebsite'</span>, {
  websiteIndexDocument: <span class="hljs-string">'index.html'</span>,
  websiteErrorDocument: <span class="hljs-string">'error.html'</span>
});

<span class="hljs-comment">// That's it! You just created:</span>
<span class="hljs-comment">// - S3 bucket with proper website configuration</span>
<span class="hljs-comment">// - CloudFront distribution</span>
<span class="hljs-comment">// - Origin Access Identity</span>
<span class="hljs-comment">// - Proper IAM policies</span>
<span class="hljs-comment">// - HTTPS redirect</span>
<span class="hljs-comment">// - Security headers</span>
</code></pre>
<p>With just a few lines, you get a production-ready static website setup that would take hundreds of lines in L1.</p>
<h2 id="heading-common-mistakes-that-will-drive-you-crazy">Common Mistakes That Will Drive You Crazy</h2>
<h3 id="heading-mistake-1-mixing-property-types">Mistake #1: Mixing Property Types</h3>
<pre><code class="lang-typescript"><span class="hljs-comment">// 🚫 This won't work!</span>
<span class="hljs-keyword">const</span> bucket = <span class="hljs-keyword">new</span> CfnBucket(<span class="hljs-built_in">this</span>, <span class="hljs-string">'MyBucket'</span>, {
  encryption: BucketEncryption.S3_MANAGED <span class="hljs-comment">// L2 property type</span>
});

<span class="hljs-comment">// ✅ Use the correct L1 property type</span>
<span class="hljs-keyword">const</span> bucket = <span class="hljs-keyword">new</span> CfnBucket(<span class="hljs-built_in">this</span>, <span class="hljs-string">'MyBucket'</span>, {
  bucketEncryption: {
    serverSideEncryptionConfiguration: [{
      serverSideEncryptionByDefault: {
        sseAlgorithm: <span class="hljs-string">'AES256'</span>
      }
    }]
  }
});
</code></pre>
<h3 id="heading-mistake-2-assuming-l3-constructs-are-always-better">Mistake #2: Assuming L3 Constructs Are Always Better</h3>
<pre><code class="lang-typescript"><span class="hljs-comment">// Using L3 when you need specific customization</span>
<span class="hljs-keyword">const</span> website = <span class="hljs-keyword">new</span> StaticWebsite(<span class="hljs-built_in">this</span>, <span class="hljs-string">'MyWebsite'</span>, {
  <span class="hljs-comment">// Oh no! I can't set specific CloudFront behaviors</span>
  <span class="hljs-comment">// or custom cache policies here! 😱</span>
});

<span class="hljs-comment">// Sometimes L2 gives you more control</span>
<span class="hljs-keyword">const</span> bucket = <span class="hljs-keyword">new</span> Bucket(<span class="hljs-built_in">this</span>, <span class="hljs-string">'WebBucket'</span>);
<span class="hljs-keyword">const</span> distribution = <span class="hljs-keyword">new</span> CloudFrontWebDistribution(<span class="hljs-built_in">this</span>, <span class="hljs-string">'MyDist'</span>, {
  <span class="hljs-comment">// Full control over every setting</span>
});
</code></pre>
<h3 id="heading-mistake-3-not-using-escape-hatches">Mistake #3: Not Using Escape Hatches</h3>
<p>What if you need to modify an L2 construct's underlying L1 resource?</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> bucket = <span class="hljs-keyword">new</span> Bucket(<span class="hljs-built_in">this</span>, <span class="hljs-string">'MyBucket'</span>);

<span class="hljs-comment">// Access the L1 construct (escape hatch)</span>
<span class="hljs-keyword">const</span> cfnBucket = bucket.node.defaultChild <span class="hljs-keyword">as</span> CfnBucket;

<span class="hljs-comment">// Now you can set ANY CloudFormation property</span>
cfnBucket.analyticsConfigurations = [{
  id: <span class="hljs-string">'my-analytics'</span>,
  storageClassAnalysis: {
    dataExport: {
      destination: {
        bucketArn: <span class="hljs-string">'arn:aws:s3:::my-analytics-bucket'</span>
      }
    }
  }
}];
</code></pre>
<h2 id="heading-the-hidden-benefits-of-each-level">The Hidden Benefits of Each Level</h2>
<h3 id="heading-l1-benefits-you-didnt-know-about">L1 Benefits You Didn't Know About</h3>
<ol>
<li><p><strong>Immediate AWS Feature Support</strong> - No waiting for CDK updates</p>
</li>
<li><p><strong>CloudFormation Parity</strong> - Easy to convert existing templates</p>
</li>
<li><p><strong>Learning Tool</strong> - Understand what L2 constructs do under the hood</p>
</li>
</ol>
<h3 id="heading-l2-benefits-that-save-time">L2 Benefits That Save Time</h3>
<ol>
<li><p><strong>Automatic Security Defaults</strong> - Encryption enabled by default</p>
</li>
<li><p><strong>Cross-Service Integration</strong> - <code>grant*</code> methods handle IAM for you (see the sketch after this list)</p>
</li>
<li><p><strong>Type Safety</strong> - Catch errors at compile time, not deployment</p>
</li>
</ol>
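<p>That second benefit deserves a quick illustration. A minimal sketch, assuming the <code>bucket</code> and <code>myLambdaFunction</code> from the earlier L2 example:</p>
<pre><code class="lang-typescript">// One call attaches a scoped IAM policy to the function's execution role
bucket.grantWrite(myLambdaFunction);

// CDK synthesizes permissions like s3:PutObject and s3:DeleteObject*,
// restricted to this bucket's ARN. No hand-written policy JSON needed.
</code></pre>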
<h3 id="heading-l3-benefits-for-real-projects">L3 Benefits for Real Projects</h3>
<ol>
<li><p><strong>Proven Architectures</strong> - AWS Solutions Constructs follow best practices</p>
</li>
<li><p><strong>Compliance Ready</strong> - Many patterns are pre-validated for security</p>
</li>
<li><p><strong>Rapid Prototyping</strong> - Get a working system in minutes</p>
</li>
</ol>
<h2 id="heading-creating-your-own-l3-construct">Creating Your Own L3 Construct</h2>
<p>Here's a practical example of creating your own pattern:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">import</span> { Construct } <span class="hljs-keyword">from</span> <span class="hljs-string">'constructs'</span>;
<span class="hljs-keyword">import</span> { Bucket, BucketEncryption } <span class="hljs-keyword">from</span> <span class="hljs-string">'aws-cdk-lib/aws-s3'</span>;
<span class="hljs-keyword">import</span> { <span class="hljs-built_in">Function</span>, Runtime, Code } <span class="hljs-keyword">from</span> <span class="hljs-string">'aws-cdk-lib/aws-lambda'</span>;
<span class="hljs-keyword">import</span> * <span class="hljs-keyword">as</span> path <span class="hljs-keyword">from</span> <span class="hljs-string">'path'</span>;

<span class="hljs-keyword">export</span> <span class="hljs-keyword">class</span> SecureDataProcessor <span class="hljs-keyword">extends</span> Construct {
  <span class="hljs-keyword">public</span> <span class="hljs-keyword">readonly</span> bucket: Bucket;
  <span class="hljs-keyword">public</span> <span class="hljs-keyword">readonly</span> processor: <span class="hljs-built_in">Function</span>;

  <span class="hljs-keyword">constructor</span>(<span class="hljs-params">scope: Construct, id: <span class="hljs-built_in">string</span></span>) {
    <span class="hljs-built_in">super</span>(scope, id);

    <span class="hljs-comment">// Create encrypted bucket</span>
    <span class="hljs-built_in">this</span>.bucket = <span class="hljs-keyword">new</span> Bucket(<span class="hljs-built_in">this</span>, <span class="hljs-string">'DataBucket'</span>, {
      encryption: BucketEncryption.KMS_MANAGED,
      versioned: <span class="hljs-literal">true</span>,
      enforceSSL: <span class="hljs-literal">true</span>
    });

    <span class="hljs-comment">// Create processing Lambda</span>
    <span class="hljs-built_in">this</span>.processor = <span class="hljs-keyword">new</span> <span class="hljs-built_in">Function</span>(<span class="hljs-built_in">this</span>, <span class="hljs-string">'Processor'</span>, {
      runtime: Runtime.NODEJS_18_X,
      handler: <span class="hljs-string">'index.handler'</span>,
      code: Code.fromAsset(path.join(__dirname, <span class="hljs-string">'lambda'</span>))
    });

    <span class="hljs-comment">// Wire them together</span>
    <span class="hljs-built_in">this</span>.bucket.grantRead(<span class="hljs-built_in">this</span>.processor);
    <span class="hljs-built_in">this</span>.bucket.addEventNotification(
      EventType.OBJECT_CREATED,
      <span class="hljs-keyword">new</span> LambdaDestination(<span class="hljs-built_in">this</span>.processor)
    );
  }
}

<span class="hljs-comment">// Now anyone can use your pattern!</span>
<span class="hljs-keyword">const</span> dataProcessor = <span class="hljs-keyword">new</span> SecureDataProcessor(<span class="hljs-built_in">this</span>, <span class="hljs-string">'MyProcessor'</span>);
</code></pre>
<h2 id="heading-when-to-use-each-construct-level">When to Use Each Construct Level</h2>
<p><strong>Use L1 when:</strong></p>
<ul>
<li><p>You need bleeding-edge AWS features</p>
</li>
<li><p>Migrating from CloudFormation</p>
</li>
<li><p>Debugging CDK issues</p>
</li>
<li><p>You need a specific CloudFormation property</p>
</li>
</ul>
<p><strong>Use L2 when:</strong></p>
<ul>
<li><p>Building most production applications</p>
</li>
<li><p>You want security best practices by default</p>
</li>
<li><p>You need to integrate multiple services</p>
</li>
<li><p>You value developer productivity</p>
</li>
</ul>
<p><strong>Use L3 when:</strong></p>
<ul>
<li><p>Implementing common patterns</p>
</li>
<li><p>Rapid prototyping</p>
</li>
<li><p>Enforcing organizational standards</p>
</li>
<li><p>You don't need heavy customization</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1753382468675/08eb1a52-830b-48d3-b077-8b018f7f9c15.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-the-future-of-cdk-constructs">The Future of CDK Constructs</h2>
<p>AWS is continuously improving CDK constructs. New services get L1 support immediately through CloudFormation, L2 constructs follow within weeks or months, and the community creates L3 patterns for common use cases.</p>
<p>Remember: There's no "wrong" construct level. Each serves a purpose, and experienced CDK developers often mix levels within the same application.</p>
<h2 id="heading-was-this-article-helpful-for-you-if-so-kindly-subscribe-to-my-bimonthly-newsletter">Was this article helpful for you? If so, kindly subscribe to my bimonthly newsletter.</h2>
]]></content:encoded></item><item><title><![CDATA[Understanding PostgreSQL Row-Level Security Through pg_cron: A Practical Guide]]></title><description><![CDATA[Imagine this scenario: You have a multi-tenant PostgreSQL database where different teams or customers share the same schema. The last thing you want is one user accidentally (or maliciously) seeing another user’s data. That’s exactly where Row-Level ...]]></description><link>https://blog.yusadolat.me/understanding-postgresql-row-level-security-through-pgcron-a-practical-guide</link><guid isPermaLink="true">https://blog.yusadolat.me/understanding-postgresql-row-level-security-through-pgcron-a-practical-guide</guid><category><![CDATA[rds]]></category><category><![CDATA[PostgreSQL]]></category><category><![CDATA[#multitenancy]]></category><category><![CDATA[Security]]></category><dc:creator><![CDATA[Yusuf Adeyemo]]></dc:creator><pubDate>Fri, 24 Jan 2025 17:44:30 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1737740517202/d9a04000-ee53-481d-91ea-d11872b5e548.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Imagine this scenario: You have a multi-tenant PostgreSQL database where different teams or customers share the same schema. The last thing you want is one user accidentally (or maliciously) seeing another user’s data. That’s exactly where <strong>Row-Level Security (RLS)</strong> steps in, acting like an invisible bouncer for each row in your database.</p>
<p>In this article, we’ll explore RLS in PostgreSQL through a practical example involving <strong>pg_cron</strong>, a built-in job scheduling extension for PostgreSQL. We’ll walk through the benefits of this fine-grained security model, demonstrate how pg_cron leverages it, and highlight best practices to keep your database environment both secure and efficient.</p>
<hr />
<h2 id="heading-what-is-row-level-security-rls">What is Row-Level Security (RLS)?</h2>
<p>At its core, <strong>Row-Level Security</strong> is a PostgreSQL feature that allows you to enforce access policies at the most granular level: the row. Instead of trusting your application code to handle security logic, RLS shifts that responsibility to the database engine itself. Each time a user queries a table, RLS policies determine which rows they can view or modify—automatically and behind the scenes.</p>
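<p>Here's what that looks like in its simplest form, a minimal self-contained sketch (the table and policy names are made up for illustration):</p>
<pre><code class="lang-sql">-- A shared table with a column identifying each row's owner
CREATE TABLE notes (
    id    serial PRIMARY KEY,
    owner text NOT NULL DEFAULT current_user,
    body  text
);

-- Turn RLS on for the table
ALTER TABLE notes ENABLE ROW LEVEL SECURITY;

-- Each user sees (and can modify) only their own rows
CREATE POLICY notes_owner_policy ON notes
    USING (owner = current_user);
</code></pre>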
<h3 id="heading-why-rls-matters">Why RLS Matters</h3>
<ol>
<li><p><strong>Multi-Tenant Isolation</strong>: Ideal for SaaS applications where multiple tenants share the same database.</p>
</li>
<li><p><strong>Reduced Risk</strong>: Minimizes data leaks caused by application bugs or misconfigurations.</p>
</li>
<li><p><strong>Cleaner Code</strong>: Moves security logic from the application layer to the database layer, making your code less cluttered.</p>
</li>
<li><p><strong>Less Overhead</strong>: Users only see the rows they have access to, with no extra logic needed in queries or controllers.</p>
</li>
</ol>
<hr />
<h2 id="heading-a-real-world-example-pgcron-and-rls">A Real-World Example: pg_cron and RLS</h2>
<p><img src="https://opengraph.githubassets.com/e317851361000e5b5df4698a88e85729da2dd5a48a68c4cd61c1d768f1c8f8dd/citusdata/pg_cron" alt="GitHub - citusdata/pg_cron: Run periodic jobs in PostgreSQL" /></p>
<p>To illustrate RLS, let’s look at <strong>pg_cron</strong>, PostgreSQL’s job scheduling extension. With pg_cron, you can schedule periodic tasks (like database backups or maintenance jobs) by storing job definitions inside dedicated tables.</p>
<h3 id="heading-pgcrons-key-tables">pg_cron’s Key Tables</h3>
<ul>
<li><p><strong>cron.job</strong>: Stores scheduled job definitions (think of it like a cron schedule entry).</p>
</li>
<li><p><strong>cron.job_run_details</strong>: Stores execution history for those jobs.</p>
</li>
</ul>
<p>By default, pg_cron uses RLS to ensure that each user can only manage or view the jobs they’ve created.</p>
<hr />
<h2 id="heading-default-rls-policies-in-pgcron">Default RLS Policies in pg_cron</h2>
<p>Here’s a peek at how pg_cron’s built-in RLS policies look:</p>
<pre><code class="lang-sql"><span class="hljs-keyword">CREATE</span> <span class="hljs-keyword">POLICY</span> cron_job_policy <span class="hljs-keyword">ON</span> cron.job 
    <span class="hljs-keyword">USING</span> (username = <span class="hljs-keyword">CURRENT_USER</span>);

<span class="hljs-keyword">CREATE</span> <span class="hljs-keyword">POLICY</span> cron_job_run_details_policy <span class="hljs-keyword">ON</span> cron.job_run_details 
    <span class="hljs-keyword">USING</span> (username = <span class="hljs-keyword">CURRENT_USER</span>);
</code></pre>
<p>These policies effectively say: <em>“Only show rows where</em> <code>username</code> matches the currently logged-in user.” It’s a simple, yet powerful way to ensure user separation in a multi-tenant or multi-user environment.</p>
<hr />
<h2 id="heading-practical-implementation">Practical Implementation</h2>
<p>Let’s walk through some hands-on steps to see how RLS and pg_cron work together.</p>
<h3 id="heading-1-enabling-row-level-security">1. Enabling Row-Level Security</h3>
<p>By default, PostgreSQL requires you to enable RLS on a table before policies take effect. In pg_cron, this is often done for you, but if you ever need to do it manually:</p>
<pre><code class="lang-sql"><span class="hljs-keyword">ALTER</span> <span class="hljs-keyword">TABLE</span> cron.job <span class="hljs-keyword">ENABLE</span> <span class="hljs-keyword">ROW</span> <span class="hljs-keyword">LEVEL</span> <span class="hljs-keyword">SECURITY</span>;
<span class="hljs-keyword">ALTER</span> <span class="hljs-keyword">TABLE</span> cron.job_run_details <span class="hljs-keyword">ENABLE</span> <span class="hljs-keyword">ROW</span> <span class="hljs-keyword">LEVEL</span> <span class="hljs-keyword">SECURITY</span>;
</code></pre>
<h3 id="heading-2-creating-a-scheduled-job">2. Creating a Scheduled Job</h3>
<p>When a user creates a job—say, a daily VACUUM ANALYZE—it automatically gets tagged with their identity. For example:</p>
<pre><code class="lang-sql"><span class="hljs-keyword">SELECT</span> cron.schedule(<span class="hljs-string">'daily-backup'</span>, <span class="hljs-string">'0 0 * * *'</span>, <span class="hljs-string">'VACUUM ANALYZE'</span>);
</code></pre>
<p>The RLS policy ensures the job’s row is “owned” by the user who created it. When another user queries the <code>cron.job</code> table, they won’t see this entry.</p>
<h3 id="heading-3-viewing-scheduled-jobs">3. Viewing Scheduled Jobs</h3>
<p>Because of the RLS policy, each user sees only their own rows:</p>
<pre><code class="lang-sql"><span class="hljs-comment">-- Logged in as User1:</span>
<span class="hljs-keyword">SELECT</span> * <span class="hljs-keyword">FROM</span> cron.job;
<span class="hljs-comment">-- Result: Only User1’s jobs</span>

<span class="hljs-comment">-- Logged in as User2:</span>
<span class="hljs-keyword">SELECT</span> * <span class="hljs-keyword">FROM</span> cron.job;
<span class="hljs-comment">-- Result: Only User2’s jobs</span>
</code></pre>
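<p>The same filtering applies to execution history. To check how your own jobs have been running:</p>
<pre><code class="lang-sql">-- RLS filters this to your own jobs as well
SELECT jobid, status, return_message, start_time, end_time
FROM cron.job_run_details
ORDER BY start_time DESC
LIMIT 10;
</code></pre>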
<h3 id="heading-4-administrator-access">4. Administrator Access</h3>
<p>What if you’re an admin and need to see <em>every</em> user’s job? You can create a policy that grants full visibility to superusers or a specific admin role:</p>
<pre><code class="lang-sql"><span class="hljs-keyword">CREATE</span> <span class="hljs-keyword">POLICY</span> admin_cron_job_policy <span class="hljs-keyword">ON</span> cron.job 
    <span class="hljs-keyword">USING</span> (
      username = <span class="hljs-keyword">CURRENT_USER</span> 
      <span class="hljs-keyword">OR</span> <span class="hljs-keyword">CURRENT_USER</span> <span class="hljs-keyword">IN</span> (<span class="hljs-keyword">SELECT</span> rolname <span class="hljs-keyword">FROM</span> pg_roles <span class="hljs-keyword">WHERE</span> rolsuper)
    );
</code></pre>
<p>With this in place, admins can bypass the default policy and see all rows in <code>cron.job</code>.</p>
<hr />
<h2 id="heading-common-scenarios-and-solutions">Common Scenarios and Solutions</h2>
<h3 id="heading-scenario-1-read-only-access-to-all-jobs">Scenario 1: Read-Only Access to All Jobs</h3>
<p>You might have a monitoring role that needs to view all scheduled jobs but not modify them. Here’s how to give them read-only access:</p>
<pre><code class="lang-sql"><span class="hljs-comment">-- Create a read-only policy for the monitor role:</span>
<span class="hljs-keyword">CREATE</span> <span class="hljs-keyword">POLICY</span> monitor_cron_job_policy <span class="hljs-keyword">ON</span> cron.job
    <span class="hljs-keyword">FOR</span> <span class="hljs-keyword">SELECT</span>
    <span class="hljs-keyword">TO</span> monitor_role
    <span class="hljs-keyword">USING</span> (<span class="hljs-literal">true</span>);

<span class="hljs-comment">-- Grant SELECT permissions on the cron.job table:</span>
<span class="hljs-keyword">GRANT</span> <span class="hljs-keyword">SELECT</span> <span class="hljs-keyword">ON</span> cron.job <span class="hljs-keyword">TO</span> monitor_role;
</code></pre>
<h3 id="heading-scenario-2-team-based-access">Scenario 2: Team-Based Access</h3>
<p>In some organizations, teams need to share access to each other’s jobs while still isolating from other groups. You can implement a team-based policy, assuming a separate table (e.g., <code>user_teams</code>) stores team associations:</p>
<pre><code class="lang-sql"><span class="hljs-keyword">CREATE</span> <span class="hljs-keyword">POLICY</span> team_cron_job_policy <span class="hljs-keyword">ON</span> cron.job
    <span class="hljs-keyword">USING</span> (
      team_id = (
        <span class="hljs-keyword">SELECT</span> team_id 
        <span class="hljs-keyword">FROM</span> user_teams 
        <span class="hljs-keyword">WHERE</span> username = <span class="hljs-keyword">CURRENT_USER</span>
      )
    );
</code></pre>
<hr />
<h2 id="heading-best-practices">Best Practices</h2>
<ol>
<li><p><strong>Test Your Policies Thoroughly</strong>: Run queries under different user roles to confirm that policies behave as intended.</p>
</li>
<li><p><strong>Document Everything</strong>: Clear documentation on who can see and do what saves you headaches later.</p>
</li>
<li><p><strong>Conduct Regular Audits</strong>: Periodically review logs and access patterns to ensure policies are still aligned with your security needs.</p>
</li>
<li><p><strong>Include Policies in Backups</strong>: Policies are part of your schema. Make sure they’re included in any disaster recovery strategy.</p>
</li>
</ol>
<hr />
<h2 id="heading-common-pitfalls">Common Pitfalls</h2>
<ol>
<li><p><strong>Performance Considerations</strong>: Very complex or large sets of RLS policies can affect query performance. Keep an eye on your query plans.</p>
</li>
<li><p><strong>Overlapping Policies</strong>: Multiple policies can interact in unexpected ways. Always test for unintended overlaps.</p>
</li>
<li><p><strong>Maintenance Overhead</strong>: More policies mean more complexity. Review them regularly to ensure they’re still necessary.</p>
</li>
</ol>
<hr />
<h2 id="heading-monitoring-and-troubleshooting">Monitoring and Troubleshooting</h2>
<h3 id="heading-viewing-active-policies">Viewing Active Policies</h3>
<p>If you ever need a bird’s-eye view of all active RLS policies:</p>
<pre><code class="lang-sql"><span class="hljs-keyword">SELECT</span> schemaname, tablename, policyname, <span class="hljs-keyword">roles</span>, cmd, qual 
<span class="hljs-keyword">FROM</span> pg_policies 
<span class="hljs-keyword">WHERE</span> schemaname = <span class="hljs-string">'cron'</span>;
</code></pre>
<h3 id="heading-debugging-access-issues">Debugging Access Issues</h3>
<ul>
<li><p><strong>Check Permissions</strong>:</p>
<pre><code class="lang-sql">  <span class="hljs-keyword">SELECT</span> has_table_privilege(<span class="hljs-string">'username'</span>, <span class="hljs-string">'cron.job'</span>, <span class="hljs-string">'SELECT'</span>);
</code></pre>
</li>
<li><p><strong>Examine Query Plans</strong>:</p>
<pre><code class="lang-sql">  <span class="hljs-keyword">EXPLAIN</span> (<span class="hljs-keyword">ANALYZE</span>) <span class="hljs-keyword">SELECT</span> * <span class="hljs-keyword">FROM</span> cron.job;
</code></pre>
<p>  This helps you see if policies are being applied and how they affect performance.</p>
</li>
</ul>
<hr />
<h2 id="heading-conclusion">Conclusion</h2>
<p><strong>Row-Level Security</strong> is a game-changer for multi-tenant or multi-user PostgreSQL databases. By integrating RLS with <strong>pg_cron</strong>, you get a firsthand look at how PostgreSQL enforces strict data isolation at the row level—automatically filtering out data that a user shouldn’t see.</p>
<p>Whether you’re managing a small startup or a large enterprise environment, RLS helps you sleep easier by ensuring each user’s data remains exactly where it should: out of sight for everyone else. Pair this with careful monitoring, thorough testing, and clear documentation, and you’ve got a powerful, secure setup that keeps your database environment running smoothly.</p>
<hr />
<p>Feel free to explore the official PostgreSQL and pg_cron documentation for more in-depth information. As always, happy coding and scheduling!</p>
]]></content:encoded></item><item><title><![CDATA[How to Pay AWS Bills in Naira: A Quick Guide]]></title><description><![CDATA[With AWS now supporting Naira, you can skip juggling foreign exchange and just focus on building. Here’s how to set it up:
1. Log In to Your AWS Account
Head to the AWS Console and sign in as usual.
2. Go to Billing and Cost Management
You’ll find th...]]></description><link>https://blog.yusadolat.me/how-to-pay-aws-bills-in-naira-a-quick-guide</link><guid isPermaLink="true">https://blog.yusadolat.me/how-to-pay-aws-bills-in-naira-a-quick-guide</guid><category><![CDATA[AWS]]></category><category><![CDATA[billing]]></category><category><![CDATA[Nigeria]]></category><dc:creator><![CDATA[Yusuf Adeyemo]]></dc:creator><pubDate>Tue, 14 Jan 2025 20:03:44 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1736886434413/dcd07ced-223e-495b-90c9-8c290cca3203.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>With AWS now supporting Naira, you can skip juggling foreign exchange and just focus on building. Here’s how to set it up:</p>
<p>1. <strong>Log In to Your AWS Account</strong></p>
<p>Head to the AWS Console and sign in as usual.</p>
<p>2. <strong>Go to Billing and Cost Management</strong></p>
<p>You’ll find this under the main menu.</p>
<p>3. <strong>Preferences and Settings</strong></p>
<p>Once there, look for an option labeled <em>Payment Preferences</em>.</p>
<p>4. <strong>Edit Payment Currency</strong></p>
<p>Click <strong>Edit</strong>, then pick <strong>Nigerian Naira (NGN)</strong> from the dropdown.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1736887052069/c005c7a7-005a-4c79-8a8c-0636582d560f.png" alt class="image--center mx-auto" /></p>
<p>5. <strong>Save Changes</strong></p>
<p>Congrats, future invoices will be issued in Naira!</p>
<p><strong>Why This Matters</strong></p>
<ul>
<li><p><strong>No More FX Fees</strong> – You’re not burning extra cash on currency conversions.</p>
</li>
<li><p><strong>Local Payment Ease</strong> – It’s simpler to manage local transactions and budgets.</p>
</li>
<li><p><strong>Aligns with the Lagos Local Zone</strong> – Perfect if you’re leveraging AWS’s local zone in Nigeria.</p>
</li>
</ul>
<p>That’s it! It’s quick, it’s easy, and it saves you from fiddling with exchange rates. Now you can invest the difference in testing, automations, or that next big idea.</p>
<p>Peace to all, and happy building!</p>
]]></content:encoded></item><item><title><![CDATA[Nomad 101: The Simpler, Smarter Way to Orchestrate Applications]]></title><description><![CDATA[Nomad is a personal favorite when I need a straightforward, single-binary orchestrator that just works. It’s built by HashiCorp, the folks behind Terraform and Vault, and it takes a minimalistic approach to scheduling and managing containerized (and ...]]></description><link>https://blog.yusadolat.me/nomad-101-the-simpler-smarter-way-to-orchestrate-applications</link><guid isPermaLink="true">https://blog.yusadolat.me/nomad-101-the-simpler-smarter-way-to-orchestrate-applications</guid><category><![CDATA[nomad]]></category><category><![CDATA[hashicorp]]></category><category><![CDATA[Docker]]></category><category><![CDATA[Applications]]></category><dc:creator><![CDATA[Yusuf Adeyemo]]></dc:creator><pubDate>Tue, 31 Dec 2024 17:34:58 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1735666331605/324991da-7d3e-49d6-ae6f-07cc1befa7bf.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Nomad is a personal favorite when I need a straightforward, single-binary orchestrator that just works. It’s built by HashiCorp, the folks behind Terraform and Vault, and it takes a minimalistic approach to scheduling and managing containerized (and even non-containerized) workloads. Nomad might be the perfect fit if you’ve ever felt that Kubernetes is overkill for a simpler workload.</p>
<p>In this post, I’ll walk you through installing Nomad, spinning it up in a small environment, and running a workload to see it in action. By the end, you’ll have a solid hands-on feel for how to use Nomad. Let’s dive right in.</p>
<hr />
<h2 id="heading-why-nomad">Why Nomad?</h2>
<p>For me, Nomad offers a couple of killer advantages:</p>
<ol>
<li><p><strong>Simplicity</strong>: Nomad is a single, self-contained binary that can manage containers, VMs, and standalone applications. Configuration is straightforward and uses HCL (HashiCorp Configuration Language), which you might already know from Terraform.</p>
</li>
<li><p><strong>Low Overhead</strong>: In contrast to something like Kubernetes, which demands multiple components (etcd, kube-scheduler, kube-apiserver, etc.), Nomad keeps the architecture lean, meaning fewer moving parts and less operational complexity.</p>
</li>
<li><p><strong>Scale Without the Bloat</strong>: Just because it’s simple doesn’t mean it’s small-time. Nomad can run at massive scale. Start small on a single node, then grow into a cluster as your needs evolve.</p>
</li>
<li><p><strong>Broad Workload Support</strong>: Containerized apps are the norm these days, but if you have legacy apps or specialized workloads, Nomad accommodates them too. This flexibility makes it easier to transition older systems into orchestrated environments without rewriting everything.</p>
</li>
</ol>
<hr />
<h2 id="heading-setting-up-nomad-locally">Setting Up Nomad Locally</h2>
<p>Let’s talk about setting up a Nomad environment on your local machine for a quick test. I’ll assume you’re running on some flavor of Linux or macOS. If you’re on Windows, you can still follow along using WSL2 or a VM.</p>
<ol>
<li><p><strong>Download Nomad</strong><br /> Head over to the official <a target="_blank" href="https://www.nomadproject.io/downloads">Nomad Releases page</a> and download the appropriate binary. Extract it, move it to a directory in your PATH (like <code>/usr/local/bin</code>), and you’re good to go. For instance:</p>
<pre><code class="lang-bash"> wget https://releases.hashicorp.com/nomad/&lt;version&gt;/nomad_&lt;version&gt;_linux_amd64.zip
 unzip nomad_&lt;version&gt;_linux_amd64.zip
 sudo mv nomad /usr/<span class="hljs-built_in">local</span>/bin/
 nomad version
</code></pre>
<p> You’ll see Nomad’s version printed out if everything is correct.</p>
</li>
<li><p><strong>Development Agent</strong><br /> Nomad has a “dev” mode, which is a single-process setup that runs the server and client in one go—perfect for local testing. Simply run:</p>
<pre><code class="lang-bash"> nomad agent -dev
</code></pre>
 <p> This command starts Nomad in development mode and spawns a web UI on <a target="_blank" href="http://localhost:4646/">http://localhost:4646</a>. If you navigate there, you’ll see the Nomad dashboard with your single node. (A quick CLI sanity check follows right after this list.)</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1735665526848/0baf8fb9-4ddb-40de-ae53-4fc46470d3d5.png" alt class="image--center mx-auto" /></p>
</li>
</ol>
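<p>Once the dev agent is up, a couple of commands confirm the cluster sees itself:</p>
<pre><code class="lang-bash"># List client nodes (in dev mode, just your one machine)
nomad node status

# List server members participating in the cluster
nomad server members
</code></pre>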
<hr />
<h2 id="heading-nomad-architecture-at-a-glance">Nomad Architecture at a Glance</h2>
<p>In a more robust setup, Nomad is typically deployed as a <strong>cluster</strong> of server nodes and client nodes:</p>
<ul>
<li><p><strong>Server nodes</strong> handle scheduling decisions and maintain cluster state.</p>
</li>
<li><p><strong>Client nodes</strong> run workloads assigned to them by the servers.</p>
</li>
</ul>
<p>But for now, dev mode is all we need. Later on, you could spin up a 3-node server cluster with as many clients as you want.</p>
<hr />
<h2 id="heading-running-your-first-nomad-job">Running Your First Nomad Job</h2>
<p>A Nomad “job” describes what you want to run, how many instances, resource constraints, etc. Jobs are written in HCL, so it’ll feel familiar if you’ve ever used Terraform. Let’s do a quick example by running a Docker-based web server.</p>
<h3 id="heading-basic-hcl-job-file">Basic HCL Job File</h3>
<p>Create a file called <code>nginx.nomad</code>:</p>
<pre><code class="lang-plaintext">job "nginx-web" {
  datacenters = ["dc1"]
  type        = "service"

  group "web-group" {
    count = 1

    task "web" {
      driver = "docker"

      config {
        image = "nginx:latest"
        ports = ["http"]
      }

      resources {
        cpu    = 100
        memory = 128
      }
    }

    network {
      port "http" {
        static = 8080
        to     = 80   # forward host port 8080 to nginx's port 80 inside the container
      }
    }
  }
}
</code></pre>
<p>Let’s break it down:</p>
<ul>
<li><p><strong>job "nginx-web"</strong>: Defines our job name and type. We’re calling it a “service” because it’s a long-running service.</p>
</li>
<li><p><strong>group "web-group"</strong>: A group can contain multiple tasks that share resources and networking. Here, we only have one task.</p>
</li>
<li><p><strong>task "web"</strong>: Tells Nomad to run an Nginx container. We specify the <strong>docker</strong> driver.</p>
</li>
<li><p><strong>network</strong>: Maps nginx’s container port 80 (via <code>to = 80</code>) to a static host port (8080 in this case) so you can access it on <code>localhost:8080</code>.</p>
</li>
</ul>
<h3 id="heading-run-the-job">Run the Job</h3>
<p>Launch it with:</p>
<pre><code class="lang-bash">nomad job run nginx.nomad
</code></pre>
<p>Nomad will parse the file, create the job, and schedule it on the local dev client. If all goes well, you’ll see output indicating the job has been placed.</p>
<h3 id="heading-verify-its-running">Verify It’s Running</h3>
<p>Head to your browser at <a target="_blank" href="http://localhost:4646/">http://localhost:4646</a> and click on “Jobs.” You should see <code>nginx-web</code> running. Now try <a target="_blank" href="http://localhost:8080/">http://localhost:8080</a> in your browser. Nginx’s default “Welcome” page means it’s working!</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1735665966729/5c3660c2-e5fa-445a-b2a3-124dc6a6e4b0.png" alt class="image--center mx-auto" /></p>
<hr />
<h2 id="heading-scaling-the-service">Scaling the Service</h2>
<p>Nomad makes scaling super easy. Just update the <code>count</code> parameter in your job file. For instance, change it to:</p>
<pre><code class="lang-plaintext">count = 2
</code></pre>
<p>Then run:</p>
<pre><code class="lang-bash">nomad job run nginx.nomad
</code></pre>
<p>Nomad will place an additional instance of the container, though in dev mode you’re still on a single node, so you’ll have multiple containers on the same host. In a multi-node cluster, Nomad automatically figures out which clients have room.</p>
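<p>You can confirm the second allocation with:</p>
<pre><code class="lang-bash"># Shows the job's allocations; you should see two running
nomad job status nginx-web
</code></pre>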
<hr />
<h2 id="heading-stopping-or-updating-the-job">Stopping or Updating the Job</h2>
<p>If you want to stop the job, you can run:</p>
<pre><code class="lang-bash">nomad job stop nginx-web
</code></pre>
<p>For updates, just modify the HCL file (like changing the Docker image to a different version), then re-run <code>nomad job run nginx.nomad</code>. Nomad will handle rolling updates gracefully, spinning up new tasks before shutting down old ones (as long as you specify appropriate update stanzas).</p>
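<p>For reference, an update stanza looks something like this (the values are illustrative, not a recommendation):</p>
<pre><code class="lang-plaintext">job "nginx-web" {
  # ...

  update {
    max_parallel     = 1      # replace one allocation at a time
    min_healthy_time = "10s"  # how long a new task must stay healthy
    healthy_deadline = "5m"   # give up on an allocation after this
    auto_revert      = true   # roll back automatically if the update fails
  }
}
</code></pre>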
<hr />
<h2 id="heading-integrating-with-other-hashicorp-tools">Integrating with Other HashiCorp Tools</h2>
<p>Because Nomad shares the same style of configuration as Terraform and the same developer DNA as Vault and Consul, it’s easy to create an entire stack:</p>
<ul>
<li><p><strong>Consul</strong> for service discovery and dynamic DNS.</p>
</li>
<li><p><strong>Vault</strong> for secrets management.</p>
</li>
<li><p><strong>Terraform</strong> for provisioning the underlying infrastructure.</p>
</li>
</ul>
<p>Nomad can automatically register services with Consul, making them discoverable to other services in your environment. Storing secrets in Vault means you can dynamically inject credentials into your Nomad jobs. It all plays nicely together.</p>
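<p>For example, registering the nginx job from earlier with Consul is just a <code>service</code> block in the task (a sketch; it assumes a Consul agent is running alongside Nomad):</p>
<pre><code class="lang-plaintext">    task "web" {
      # ...

      service {
        name = "nginx-web"   # becomes discoverable as nginx-web.service.consul
        port = "http"

        check {
          type     = "http"
          path     = "/"
          interval = "10s"
          timeout  = "2s"
        }
      }
    }
</code></pre>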
<hr />
<h2 id="heading-why-i-use-nomad-over-alternatives">Why I Use Nomad Over Alternatives</h2>
<p>I’ve used Kubernetes for years, but Nomad is my go-to when:</p>
<ol>
<li><p><strong>Speed of Setup</strong>: Nomad dev mode is unbelievably quick. One binary, one command, done.</p>
</li>
<li><p><strong>Fewer Dependencies</strong>: I don’t need etcd or a separate container runtime beyond Docker. Less to break, less to learn.</p>
</li>
<li><p><strong>Flexibility</strong>: I can run Docker tasks, raw exec tasks, or even handle batch jobs and system workloads in a single cluster.</p>
</li>
</ol>
<p>Don’t get me wrong: Kubernetes excels in large, complex ecosystems. But if you want a more lightweight orchestrator or have a hybrid mix of containerized and legacy apps, Nomad’s a breath of fresh air.</p>
<hr />
<h2 id="heading-pro-tips-anticipating-what-you-might-need-next">Pro Tips (Anticipating What You Might Need Next)</h2>
<ol>
<li><p><strong>High Availability</strong>: If you plan to run in production, spin up at least three Nomad server nodes. That ensures if one server goes down, the cluster can still schedule workloads.</p>
</li>
<li><p><strong>Autopilot</strong>: Nomad’s built-in autopilot features let you automatically manage upgrades, Raft snapshots, and more to keep the cluster healthy.</p>
</li>
<li><p><strong>Authentication and ACLs</strong>: In a multi-user setup, you can integrate Nomad’s ACL system to restrict who can submit jobs or read cluster data.</p>
</li>
<li><p><strong>Plugins</strong>: There are driver plugins for everything from Docker to QEMU to AWS ECS tasks. You can run basically anything that can be launched from a command line or third-party tool.</p>
</li>
<li><p><strong>Monitoring</strong>: Nomad exposes metrics that are easy to integrate with Prometheus, Grafana, or whatever your favorite monitoring stack is.</p>
</li>
</ol>
<hr />
<h2 id="heading-final-thoughts">Final Thoughts</h2>
<p>Nomad may look unassuming as a single binary, but don’t let that fool you. It’s a robust orchestrator that simplifies complex workload management. Whether you’re prototyping a new service, gradually migrating from manual server management, or just want to avoid the overhead of a full-fledged Kubernetes stack, Nomad can handle it.</p>
<p>Why not give it a shot in your own environment? If you’ve got that messy monolith or a small container workload, Nomad might be exactly the tool you need to keep everything running smoothly without drowning in complexity.</p>
<hr />
<h3 id="heading-sources">Sources</h3>
<ul>
<li><p><a target="_blank" href="https://developer.hashicorp.com/nomad/docs">Nomad Official Documentation</a></p>
</li>
<li><p><a target="_blank" href="https://github.com/hashicorp/nomad">Nomad GitHub Repository</a></p>
</li>
</ul>
<p>Hope this helps you get started with Nomad. Let me know if you run into any snags or come up with a clever integration; I’m always interested in new ways to push this awesome orchestrator.</p>
]]></content:encoded></item><item><title><![CDATA[How I Leverage Raspberry Pi as a DevOps Engineer]]></title><description><![CDATA[As a DevOps engineer, I’m always looking for a cost-effective, reliable, and flexible way to prototype new ideas without overcommitting infrastructure resources. Sure, spinning up EC2 instances or provisioning dedicated hardware works, but when you w...]]></description><link>https://blog.yusadolat.me/how-i-leverage-raspberry-pi-as-a-devops-engineer</link><guid isPermaLink="true">https://blog.yusadolat.me/how-i-leverage-raspberry-pi-as-a-devops-engineer</guid><category><![CDATA[Raspberry Pi]]></category><category><![CDATA[Devops]]></category><category><![CDATA[AI]]></category><category><![CDATA[GitHub]]></category><category><![CDATA[NAS storage solutions]]></category><category><![CDATA[Microservices]]></category><dc:creator><![CDATA[Yusuf Adeyemo]]></dc:creator><pubDate>Mon, 16 Dec 2024 11:00:37 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1734345129231/41518d44-c7a0-4dd2-80e9-2076c0a9a1f7.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<hr />
<p>As a DevOps engineer, I’m always looking for a cost-effective, reliable, and flexible way to prototype new ideas without overcommitting infrastructure resources. Sure, spinning up EC2 instances or provisioning dedicated hardware works, but when you want a low-power, low-cost sandbox, the <strong>Raspberry Pi</strong> is hard to beat. It’s an affordable, credit-card-sized computing powerhouse that helps me test concepts, automate environments, and even experiment with local AI without racking up unnecessary cloud fees or dealing with heavy metal servers.</p>
<p>In this post, I’ll share some real-world ways I integrate Raspberry Pi devices into my workflow. If you’ve never considered them as part of a professional DevOps toolkit, I hope this gives you a few reasons to start.</p>
<hr />
<h3 id="heading-what-is-a-raspberry-pi">What is a Raspberry Pi?</h3>
<p>If you’ve never touched one, think of the Raspberry Pi as a tiny Linux-based computer board with just enough CPU, RAM, and storage to run a surprising range of workloads. It’s been wildly popular among hobbyists, educators, and professionals alike. Thanks to its Linux roots, you can tap into a massive ecosystem of software, scripting, containers, and automation tools that feel instantly familiar to anyone from a DevOps background.</p>
<hr />
<h3 id="heading-why-raspberry-pi-fits-my-needs">Why Raspberry Pi Fits My Needs</h3>
<p>As a DevOps engineer, I’ve got plenty of choices for running workloads. But the Pi hits a sweet spot when I need something quick, cheap, and on-prem:</p>
<ul>
<li><p><strong>Cost-Effective</strong>: For the price of a mid-tier cloud instance running a few weeks, I can own a Pi outright and reuse it a million times over.</p>
</li>
<li><p><strong>Energy-Efficient</strong>: A Pi draws minimal power, so I can keep it running 24/7 without worrying about my electric bill.</p>
</li>
<li><p><strong>Exceptionally Versatile</strong>: It’s a lab in a box—CI/CD runners, IoT hubs, mini Kubernetes clusters, AI inferencing boxes, local proxies, you name it.</p>
</li>
</ul>
<hr />
<h3 id="heading-how-i-use-raspberry-pi-in-my-workflow">How I Use Raspberry Pi in My Workflow</h3>
<h4 id="heading-1-prototyping-and-experimental-builds">1. Prototyping and Experimental Builds</h4>
<p>When experimenting with a new microservice, pipeline, or integration, I often spin it up on a Pi first. This gives me a stable, always-on environment to validate code, run Docker containers, test APIs, and refine configurations. It’s a great way to ensure my code and infrastructure definitions hold up before I commit cloud spend.</p>
<h4 id="heading-2-home-automation-and-iot-management">2. Home Automation and IoT Management</h4>
<p>I like to say that my home is my first “production” environment. Using a Pi as a hub, paired with something like <strong>Home Assistant</strong>, I manage a network of sensors, lights, and other IoT devices. Not only is it fun, but it also lets me practice edge automation. This experience often translates back into my professional work, where edge computing scenarios are becoming more common.</p>
<h4 id="heading-3-self-hosted-github-actions-runners">3. Self-Hosted GitHub Actions Runners</h4>
<p>If you’ve worked with GitHub Actions, you know that hosted runners can quickly rack up costs or queue times. By using a Pi as a self-hosted runner, I keep certain build and test pipelines local and cost-controlled. Best of all, I have full control over the environment and dependencies, making it easy to debug issues right in my home office.</p>
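<p>Registering a Pi follows the standard steps from your repository's Settings &gt; Actions &gt; Runners page. A sketch for the ARM64 build (the org, repo, and token below are placeholders, and the release version was current at the time of writing; check the runner's Releases page for the latest):</p>
<pre><code class="lang-bash"># Download and unpack the ARM64 runner build
mkdir actions-runner &amp;&amp; cd actions-runner
curl -o actions-runner-linux-arm64.tar.gz -L \
  https://github.com/actions/runner/releases/download/v2.319.1/actions-runner-linux-arm64-2.319.1.tar.gz
tar xzf actions-runner-linux-arm64.tar.gz

# Register against your repo with the token from the Runners page, then start it
./config.sh --url https://github.com/&lt;your-org&gt;/&lt;your-repo&gt; --token &lt;registration-token&gt;
./run.sh
</code></pre>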
<h4 id="heading-4-local-ai-experiments">4. Local AI Experiments</h4>
<p>While you won’t train GPT-4 on a Raspberry Pi, it’s still possible to run smaller models like Google’s Gemma2 (2 billion parameters) for inference tasks. This is a great way to experiment with local AI workloads or test model-serving pipelines without relying on GPU-backed cloud instances. It’s not going to replace a beefy workstation, but it’s enough to poke around with models and APIs before deciding to scale up.</p>
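<p>On a 64-bit Pi OS with enough RAM, the quickest way I know to try this is Ollama. A minimal sketch:</p>
<pre><code class="lang-bash"># Install Ollama, then pull and chat with the 2B-parameter Gemma 2 model
curl -fsSL https://ollama.com/install.sh | sh
ollama run gemma2:2b
</code></pre>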
<h4 id="heading-5-network-attached-storage-nas-and-local-file-serving">5. Network-Attached Storage (NAS) and Local File Serving</h4>
<p>If I need a quick-and-dirty NAS solution, I can set up a Pi with Samba or OpenMediaVault, attach some external storage, and voilà: a lightweight NAS on my local network. It’s not enterprise-level, but it’s perfect for stashing logs, artifacts, or just sharing files among devices at home.</p>
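<p>A bare-bones Samba share takes only a few commands. A sketch assuming an external drive mounted at <code>/mnt/storage</code> (the share name and paths are illustrative):</p>
<pre><code class="lang-bash">sudo apt install samba

# Append a simple share definition to the Samba config
sudo tee -a /etc/samba/smb.conf &gt; /dev/null &lt;&lt;'EOF'
[artifacts]
   path = /mnt/storage
   read only = no
   browseable = yes
EOF

sudo smbpasswd -a pi           # set an SMB password for the "pi" user
sudo systemctl restart smbd
</code></pre>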
<hr />
<h3 id="heading-why-it-works-for-me">Why It Works for Me</h3>
<p>Raspberry Pi devices are more than just cheap boards; they represent a frictionless approach to experimentation. Instead of spending hours setting up cloud VMs or maintaining bulky servers, I have a small fleet of Pis that act as a test bed for ideas. They let me:</p>
<ul>
<li><p>Quickly spin up and tear down environments on a budget.</p>
</li>
<li><p>Learn and iterate with minimal risk.</p>
</li>
<li><p>Scale horizontally by adding more boards when I need them.</p>
</li>
<li><p>Develop intuition for edge, IoT, and ARM-based workloads.</p>
</li>
</ul>
<hr />
<h3 id="heading-final-thoughts">Final Thoughts</h3>
<p>The Raspberry Pi offers a unique blend of accessibility, affordability, and versatility. Whether I’m refining a new CI pipeline, tinkering with home automation, or trialing lightweight AI inference, the Pi is my go-to platform for hands-on exploration. It’s a genuine force multiplier that’s expanded the way I think about infrastructure and small-scale deployments.</p>
<p><strong>What about you? How have you put Raspberry Pi to work? If you’ve got a unique use case or a clever hack, let me know. I’m always looking for fresh ways to push these tiny boards to their limits.</strong></p>
<hr />
]]></content:encoded></item><item><title><![CDATA[TDD vs BDD: Navigating the Testing Landscape in Modern Software Development]]></title><description><![CDATA[Introduction
In the ever-evolving world of software development, testing methodologies play a crucial role in ensuring the quality and reliability of applications. Two prominent approaches that have gained significant traction in recent years are Tes...]]></description><link>https://blog.yusadolat.me/tdd-vs-bdd-navigating-the-testing-landscape-in-modern-software-development</link><guid isPermaLink="true">https://blog.yusadolat.me/tdd-vs-bdd-navigating-the-testing-landscape-in-modern-software-development</guid><category><![CDATA[Tutorial]]></category><category><![CDATA[Testing]]></category><category><![CDATA[software development]]></category><dc:creator><![CDATA[Yusuf Adeyemo]]></dc:creator><pubDate>Tue, 27 Aug 2024 08:00:12 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1724736512707/45dc6e3b-6ce5-4509-bd5b-607d00d3ea23.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-introduction">Introduction</h2>
<p>In the ever-evolving world of software development, testing methodologies play a crucial role in ensuring the quality and reliability of applications. Two prominent approaches that have gained significant traction in recent years are Test-Driven Development (TDD) and Behavior-Driven Development (BDD). While both methodologies share some common ground, they each bring unique perspectives to the testing process. This article delves into the intricacies of TDD and BDD, exploring their benefits, key differences, and how they can be effectively implemented in software projects.</p>
<h2 id="heading-table-of-contents">Table of Contents</h2>
<ol>
<li><p><a class="post-section-overview" href="#understanding-test-driven-development-tdd">Understanding Test-Driven Development (TDD)</a></p>
<ul>
<li><p><a class="post-section-overview" href="#the-tdd-process">The TDD Process</a></p>
</li>
<li><p><a class="post-section-overview" href="#benefits-of-tdd">Benefits of TDD</a></p>
</li>
</ul>
</li>
<li><p><a class="post-section-overview" href="#exploring-behavior-driven-development-bdd">Exploring Behavior-Driven Development (BDD)</a></p>
<ul>
<li><p><a class="post-section-overview" href="#key-features-of-bdd">Key Features of BDD</a></p>
</li>
<li><p><a class="post-section-overview" href="#bdd-scenario-examples">BDD Scenario Examples</a></p>
</li>
</ul>
</li>
<li><p><a class="post-section-overview" href="#comparing-tdd-and-bdd">Comparing TDD and BDD</a></p>
<ul>
<li><p><a class="post-section-overview" href="#shared-benefits">Shared Benefits</a></p>
</li>
<li><p><a class="post-section-overview" href="#focus-and-approach">Focus and Approach</a></p>
</li>
</ul>
</li>
<li><p><a class="post-section-overview" href="#implementing-tdd-and-bdd-in-your-projects">Implementing TDD and BDD in Your Projects</a></p>
</li>
<li><p><a class="post-section-overview" href="#conclusion">Conclusion</a></p>
</li>
</ol>
<h2 id="heading-understanding-test-driven-development-tdd">Understanding Test-Driven Development (TDD)</h2>
<p>Test-Driven Development is a software development process that relies on the repetition of short development cycles. This methodology encourages simple designs and instills confidence in the code by ensuring that every piece of functionality is thoroughly tested.</p>
<h3 id="heading-the-tdd-process">The TDD Process</h3>
<p>The TDD process follows a specific cycle:</p>
<ol>
<li><p>Write a test for a new feature before implementing the code.</p>
</li>
<li><p>Run the new test to verify that it fails (as expected).</p>
</li>
<li><p>Write the minimum amount of code necessary to make the test pass.</p>
</li>
<li><p>Run all tests to ensure the new code passes without breaking existing functionality.</p>
</li>
<li><p>Refactor the code to improve its structure and remove any duplication.</p>
</li>
<li><p>Repeat the cycle for each new feature or functionality.</p>
</li>
</ol>
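<p>To make the cycle concrete, here is a minimal sketch in Python (file and function names are illustrative): the test is written first and fails, and then the smallest possible implementation makes it pass.</p>
<pre><code class="lang-python"># test_cart.py (written first; fails until add_item exists: the "red" step)
from cart import add_item

def test_add_item_increases_count():
    cart = []
    add_item(cart, "book")
    assert len(cart) == 1

# cart.py (the minimum code needed to make the test pass: the "green" step)
def add_item(cart, item):
    cart.append(item)
</code></pre>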
<h3 id="heading-benefits-of-tdd">Benefits of TDD</h3>
<ul>
<li><p>Encourages simple, modular designs</p>
</li>
<li><p>Provides immediate feedback on code correctness</p>
</li>
<li><p>Builds a comprehensive suite of unit tests</p>
</li>
<li><p>Improves code quality and reduces bugs</p>
</li>
<li><p>Facilitates easier refactoring and maintenance</p>
</li>
</ul>
<h2 id="heading-exploring-behavior-driven-development-bdd">Exploring Behavior-Driven Development (BDD)</h2>
<p>Behavior-Driven Development is an agile software development process that extends the principles of TDD. BDD emphasizes collaboration among developers, quality assurance professionals, and business stakeholders to create a shared understanding of how an application should behave.</p>
<h3 id="heading-key-features-of-bdd">Key Features of BDD</h3>
<ul>
<li><p>Utilizes domain-specific scripting languages (DSLs)</p>
</li>
<li><p>Defines user behavior in simple English</p>
</li>
<li><p>Converts English descriptions into automated test scripts</p>
</li>
<li><p>Focuses on the behavior of the application from an end-user perspective</p>
</li>
</ul>
<h3 id="heading-bdd-scenario-examples">BDD Scenario Examples</h3>
<p>BDD often uses scenario-based descriptions to define expected behavior. For example:</p>
<pre><code class="lang-dockerfile">Scenario: <span class="hljs-keyword">User</span> adds an item to their shopping cart
  Given the <span class="hljs-keyword">user</span> is on the product details page
  When the <span class="hljs-keyword">user</span> selects a size <span class="hljs-string">"Medium"</span>
  And the <span class="hljs-keyword">user</span> clicks the <span class="hljs-string">"Add to Cart"</span> button
  Then the item should be added to the <span class="hljs-keyword">user</span><span class="hljs-string">'s shopping cart
  And the cart total should increase by 1
  And the user should see a confirmation message "Item added to cart"</span>
</code></pre>
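<p>Frameworks such as Cucumber, or <code>behave</code> in Python, bind each plain-English step to executable code. A minimal sketch of step definitions for the first two steps above (the <code>browser</code> and page helpers are illustrative, not part of behave):</p>
<pre><code class="lang-python"># features/steps/cart_steps.py (requires the behave package)
from behave import given, when

@given("the user is on the product details page")
def step_open_product_page(context):
    # context is behave's shared state object for a scenario
    context.page = context.browser.open("/products/42")  # illustrative helper

@when('the user selects a size "{size}"')
def step_select_size(context, size):
    # {size} is parsed out of the step text and passed in as an argument
    context.page.select_size(size)  # illustrative helper
</code></pre>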
<h2 id="heading-comparing-tdd-and-bdd">Comparing TDD and BDD</h2>
<p>While TDD and BDD share some common ground, they have distinct focuses and approaches:</p>
<h3 id="heading-shared-benefits">Shared Benefits</h3>
<p>Both TDD and BDD offer several advantages to development teams:</p>
<ul>
<li><p>Early detection of errors in requirements</p>
</li>
<li><p>Improved communication between team members</p>
</li>
<li><p>Reduced overall development costs</p>
</li>
<li><p>Higher code quality and fewer bugs</p>
</li>
</ul>
<h3 id="heading-focus-and-approach">Focus and Approach</h3>
<ul>
<li><p>TDD focuses on the functionality of individual components</p>
</li>
<li><p>BDD emphasizes the behavior of the application from a user's perspective</p>
</li>
<li><p>TDD tests are typically written in the same programming language as the application</p>
</li>
<li><p>BDD tests are often written in a more accessible, natural language format</p>
</li>
</ul>
<h2 id="heading-implementing-tdd-and-bdd-in-your-projects">Implementing TDD and BDD in Your Projects</h2>
<p>To successfully implement TDD or BDD in your software projects:</p>
<ol>
<li><p>Choose the appropriate methodology based on your project's needs and team structure</p>
</li>
<li><p>Invest in training and tools to support the chosen approach</p>
</li>
<li><p>Start small and gradually expand the use of TDD or BDD across your projects</p>
</li>
<li><p>Regularly review and refine your testing processes</p>
</li>
<li><p>Foster a culture of collaboration and continuous improvement</p>
</li>
</ol>
<h2 id="heading-conclusion">Conclusion</h2>
<p>Both Test-Driven Development and Behavior-Driven Development offer valuable approaches to software testing and development. By understanding the strengths and differences of each methodology, development teams can make informed decisions about which approach best suits their projects. Whether you choose TDD, BDD, or a combination of both, implementing these methodologies can lead to higher quality software, improved team collaboration, and more satisfied end-users.</p>
<hr />
<p><em>References:</em></p>
<ol>
<li>Beck, K. (2002). <em>Test-Driven Development: By Example</em>. Addison-Wesley Professional.</li>
</ol>
]]></content:encoded></item><item><title><![CDATA[Enhancing Microservice Communication in AWS ECS with Service Discovery Techniques]]></title><description><![CDATA[Service discovery is a vital component of modern distributed systems, enabling seamless communication and dynamic scaling in environments where services frequently change IPs, ports, or even hosts. AWS Elastic Container Service (ECS) integrates seaml...]]></description><link>https://blog.yusadolat.me/enhancing-microservice-communication-in-aws-ecs-with-service-discovery-techniques</link><guid isPermaLink="true">https://blog.yusadolat.me/enhancing-microservice-communication-in-aws-ecs-with-service-discovery-techniques</guid><category><![CDATA[AWS]]></category><category><![CDATA[Service Discovery]]></category><dc:creator><![CDATA[Yusuf Adeyemo]]></dc:creator><pubDate>Sun, 11 Feb 2024 08:48:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1707641199419/c35c7a58-fa3e-42e3-9682-a2509fb76096.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Service discovery is a vital component of modern distributed systems, enabling seamless communication and dynamic scaling in environments where services frequently change IPs, ports, or even hosts. AWS Elastic Container Service (ECS) integrates seamlessly with service discovery mechanisms, simplifying the deployment and operation of microservices architectures. In this comprehensive guide, we delve into the intricacies of service discovery within ECS, ensuring your applications remain resilient and scalable.</p>
<p><strong>Understanding Service Discovery</strong></p>
<p>At its core, service discovery facilitates the dynamic detection and interaction among services in a distributed ecosystem. The challenge lies in the fluid nature of these services, which may traverse across different environments, necessitating a flexible approach to maintain connectivity. Service discovery transcends the limitations of static configurations, allowing services to communicate based on logical identifiers rather than hard-coded network addresses.</p>
<p><strong>The Role of Service Discovery in ECS</strong></p>
<p>Amazon ECS simplifies service discovery by leveraging AWS Cloud Map, a fully managed service registry that automates the discovery of ECS services. Cloud Map enables your applications to discover resources by name, eliminating the need for manual IP management or service configuration. This abstraction not only enhances flexibility but also significantly reduces the overhead associated with deploying and managing microservices.</p>
<p><strong>Implementing Service Discovery in ECS</strong></p>
<p>To utilize service discovery in ECS, you begin by registering your services with AWS Cloud Map. This process involves creating a namespace, which serves as a container for all service instances. Within this namespace, you register service names that correspond to your ECS services. Each service can then be discovered through its logical name, streamlining the interaction between different components of your application.</p>
<p><strong>Practical Example: Integrating Service Discovery with ECS</strong></p>
<p>Consider a scenario where you have a microservices architecture with a front-end service needing to communicate with a back-end service. Instead of hardcoding the back-end service's IP address, you register both services with Cloud Map under a common namespace, say <code>myapp.local</code>. The back-end service registers itself with the name <code>backend.myapp.local</code>. The front-end service, needing to send a request to the back-end, queries Cloud Map for <code>backend.myapp.local</code> and receives the current IP address and port of the back-end service. This mechanism ensures that even if the back-end service is redeployed or its IP changes, the front-end can always discover and communicate with it without any manual intervention.</p>
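<p>For readers who want to try this, here is a hedged AWS CLI sketch of the two registration steps (the VPC ID, cluster, task definition, and registry ARN below are placeholders to replace with your own values):</p>
<pre><code class="lang-bash"># 1. Create a private DNS namespace for the application
aws servicediscovery create-private-dns-namespace \
    --name myapp.local \
    --vpc vpc-0123456789abcdef0

# 2. Create the backend ECS service and register it in the namespace
#    (the registry ARN comes from the create-service output in Cloud Map)
aws ecs create-service \
    --cluster my-cluster \
    --service-name backend \
    --task-definition backend:1 \
    --desired-count 2 \
    --service-registries registryArn=arn:aws:servicediscovery:eu-west-1:123456789012:service/srv-example
</code></pre>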
<p><strong>Best Practices for Service Discovery with ECS</strong></p>
<ul>
<li><p><strong>Automate Service Registration:</strong> Ensure your ECS services are automatically registered with Cloud Map upon deployment. This can be achieved through ECS task definitions or service configurations.</p>
</li>
<li><p><strong>Health Checks:</strong> Utilize Cloud Map's health checking capabilities to automatically remove unhealthy service instances from the registry. This ensures your applications always connect to operational instances.</p>
</li>
<li><p><strong>Security:</strong> Implement appropriate IAM policies to control access to the service discovery system, ensuring only authorized services can register or discover other services.</p>
</li>
</ul>
<p><strong>Conclusion</strong></p>
<p>Service discovery is a cornerstone of modern distributed systems, ensuring applications remain resilient and adaptable to changing environments. By integrating AWS ECS with Cloud Map, developers can significantly simplify the discovery process, allowing services to dynamically interact regardless of their underlying infrastructure. This guide provides a foundation for leveraging service discovery within your ECS deployments, paving the way for more efficient and scalable applications.</p>
<p>Incorporating service discovery into your ECS strategy not only optimizes communication between services but also enhances overall application resilience. By following the practices outlined above, you can create a robust ecosystem where services seamlessly discover and interact with each other, driving efficiency and scalability across your deployments.  </p>
<p>Thank you for reading this article! If you enjoyed it and would like to stay up to date on the latest technical articles and insights, I invite you to subscribe to my newsletter. By subscribing, you'll be the first to know when we publish new articles and you'll have access to exclusive content and resources.</p>
]]></content:encoded></item><item><title><![CDATA[NodeJS Graceful Shutdown: A Beginner's Guide]]></title><description><![CDATA[Imagine this scenario: Your Node.js application is happily running, processing requests, interacting with databases, and then suddenly, it gets terminated. The system administrator decided it was time to scale down, or perhaps a critical error forced...]]></description><link>https://blog.yusadolat.me/nodejs-graceful-shutdown-a-beginners-guide</link><guid isPermaLink="true">https://blog.yusadolat.me/nodejs-graceful-shutdown-a-beginners-guide</guid><category><![CDATA[Node.js]]></category><category><![CDATA[Docker]]></category><category><![CDATA[Beginner Developers]]></category><dc:creator><![CDATA[Yusuf Adeyemo]]></dc:creator><pubDate>Tue, 23 May 2023 13:52:58 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1684849762082/7d9b77d7-5896-49bd-8df4-4652103adc8f.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Imagine this scenario: Your Node.js application is happily running, processing requests, interacting with databases, and then suddenly, it gets terminated. The system administrator decided it was time to scale down, or perhaps a critical error forced the application to exit. In any case, the application was in the middle of processing requests, writing data to a file, and now all of that is abruptly stopped. What happens to that data? What happens to your users' requests? The consequences of an abrupt shutdown can range from minor inconveniences to significant data loss, and degraded user experience. To avoid these situations, it is important to shut down your applications gracefully. In this article, we'll discuss why graceful shutdowns are important, how to handle them in Node.js applications, particularly in the context of Docker, and the potential issues that could arise if not handled correctly.</p>
<h2 id="heading-prerequisites"><strong>Prerequisites</strong></h2>
<p>Before proceeding, you should have:</p>
<ul>
<li><p>Basic knowledge of JavaScript and Node.js</p>
</li>
<li><p>Understanding of Express.js framework</p>
</li>
<li><p>Familiarity with Docker and its basic commands</p>
</li>
</ul>
<h2 id="heading-what-is-a-graceful-shutdown"><strong>What is a Graceful Shutdown?</strong></h2>
<p>A graceful shutdown involves carefully handling the shutdown signal, completing the in-progress tasks, closing the active connections, and then finally allowing the application to terminate. This ensures that the system resources are properly freed and that the application does not exit while it's in the middle of important tasks.</p>
<h2 id="heading-why-is-a-graceful-shutdown-important"><strong>Why is a Graceful Shutdown Important?</strong></h2>
<p>Handling shutdown signals in your applications allows you to manage resources properly, provide a better user experience, and help your system degrade more gracefully. Not handling these signals can lead to issues like data loss or corruption, incomplete transactions, resource leaks, and unexpected behavior.</p>
<h2 id="heading-implementing-graceful-shutdown-in-nodejs"><strong>Implementing Graceful Shutdown in Node.js</strong></h2>
<p>In this section, we'll walk through the code required to listen for shutdown signals in a Node.js application and how to perform cleanup tasks before allowing the application to exit.</p>
<h2 id="heading-listening-for-shutdown-signals"><strong>Listening for Shutdown Signals</strong></h2>
<p>In Node.js, we can listen to process-level signals, such as <code>SIGINT</code> and <code>SIGTERM</code>. These signals are emitted when the process is requested to shut down, whether by manual user interruption (<code>SIGINT</code> from Ctrl+C) or system-level termination (<code>SIGTERM</code> from Docker or another process manager). To listen for these signals, we can use the <code>process.on</code> method and provide a callback function that will be executed when these signals are received.</p>
<pre><code class="lang-typescript">process.on(<span class="hljs-string">'SIGTERM'</span>, <span class="hljs-function">() =&gt;</span> {
  <span class="hljs-built_in">console</span>.log(<span class="hljs-string">'SIGTERM signal received.'</span>);
});

process.on(<span class="hljs-string">'SIGINT'</span>, <span class="hljs-function">() =&gt;</span> {
  <span class="hljs-built_in">console</span>.log(<span class="hljs-string">'SIGINT signal received.'</span>);
});
</code></pre>
<h2 id="heading-performing-cleanup"><strong>Performing Cleanup</strong></h2>
<p>Once a shutdown signal is received, it's important to perform necessary cleanup tasks to close any open resources, finish transactions, and prepare the application for a graceful exit. This may involve closing database connections, completing any in-progress writes to file systems, or other application-specific cleanup. This cleanup code should be placed inside the callback function provided to <code>process.on</code>.</p>
<pre><code class="lang-typescript">process.on(<span class="hljs-string">'SIGTERM'</span>, <span class="hljs-function">() =&gt;</span> {
  <span class="hljs-built_in">console</span>.log(<span class="hljs-string">'SIGTERM signal received.'</span>);
  <span class="hljs-comment">// Perform cleanup tasks here</span>
});

process.on(<span class="hljs-string">'SIGINT'</span>, <span class="hljs-function">() =&gt;</span> {
  <span class="hljs-built_in">console</span>.log(<span class="hljs-string">'SIGINT signal received.'</span>);
  <span class="hljs-comment">// Perform cleanup tasks here</span>
});
</code></pre>
<p>Remember to handle asynchronous cleanup tasks correctly. If a cleanup task is asynchronous (like closing a database connection), you'll need to handle it with async/await or Promises to ensure it completes before the process exits.</p>
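<p>For instance, here is a minimal sketch of an asynchronous handler (the <code>closeDatabase</code> helper is hypothetical, standing in for whatever cleanup your application performs):</p>
<pre><code class="lang-typescript">process.on('SIGTERM', async () =&gt; {
  console.log('SIGTERM signal received.');
  try {
    // Wait for the asynchronous cleanup to finish before exiting
    await closeDatabase(); // hypothetical async cleanup helper
    process.exit(0);
  } catch (err) {
    console.error('Cleanup failed:', err);
    process.exit(1);
  }
});
</code></pre>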
<h2 id="heading-exiting-the-process"><strong>Exiting the Process</strong></h2>
<p>After performing the necessary cleanup tasks, we must manually terminate the Node.js process by calling <code>process.exit()</code>. This signals to the system (or Docker) that our application has finished shutting down. We can provide an exit code to this method; a code of 0 indicates a successful exit, while any other number indicates an error occurred. Typically, if we've handled everything correctly in our cleanup, we'll want to exit with a code of 0.</p>
<pre><code class="lang-typescript">process.on(<span class="hljs-string">'SIGTERM'</span>, <span class="hljs-function">() =&gt;</span> {
  <span class="hljs-built_in">console</span>.log(<span class="hljs-string">'SIGTERM signal received.'</span>);
  <span class="hljs-comment">// Perform cleanup tasks here</span>

  process.exit(<span class="hljs-number">0</span>);
});

process.on(<span class="hljs-string">'SIGINT'</span>, <span class="hljs-function">() =&gt;</span> {
  <span class="hljs-built_in">console</span>.log(<span class="hljs-string">'SIGINT signal received.'</span>);
  <span class="hljs-comment">// Perform cleanup tasks here</span>

  process.exit(<span class="hljs-number">0</span>);
});
</code></pre>
<p>Remember, the goal of all this is to allow your application to exit gracefully when it receives a shutdown signal. This helps reduce the risk of data corruption, loss of data, and other issues associated with an abrupt termination.</p>
<h2 id="heading-handling-shutdown-in-expressjs-application"><strong>Handling Shutdown in Express.js Application</strong></h2>
<p>Express.js applications, in particular, have some unique considerations when it comes to graceful shutdowns. This section discusses how to handle shutdown signals in an Express.js application, including how to stop the server from accepting new connections and how to ensure all existing connections are closed before shutdown.</p>
<h2 id="heading-creating-and-starting-the-server"><strong>Creating and Starting the Server</strong></h2>
<p>First, we need to create an Express.js application and start a server. Once the server is created, we can use it to close existing connections when we're ready to shut down the application.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">import</span> express <span class="hljs-keyword">from</span> <span class="hljs-string">'express'</span>;

<span class="hljs-keyword">const</span> app = express();
<span class="hljs-keyword">const</span> server = app.listen(<span class="hljs-number">3000</span>, <span class="hljs-function">() =&gt;</span> {
  <span class="hljs-built_in">console</span>.log(<span class="hljs-string">'Server listening on port 3000'</span>);
});
</code></pre>
<h2 id="heading-listening-for-shutdown-signals-1"><strong>Listening for Shutdown Signals</strong></h2>
<p>Just like in a basic Node.js application, we need to listen for <code>SIGINT</code> and <code>SIGTERM</code> signals. We can use the <code>process.on</code> method to add listeners for these signals. Inside the callback function for each listener, we'll call <code>server.close()</code> to stop the server from accepting new connections and to begin the process of shutting down.</p>
<pre><code class="lang-typescript">process.on(<span class="hljs-string">'SIGTERM'</span>, <span class="hljs-function">() =&gt;</span> {
  <span class="hljs-built_in">console</span>.log(<span class="hljs-string">'SIGTERM signal received.'</span>);
  server.close(<span class="hljs-function">() =&gt;</span> {
    <span class="hljs-built_in">console</span>.log(<span class="hljs-string">'Closed out remaining connections'</span>);
    <span class="hljs-comment">// Additional cleanup tasks go here</span>
  });
});

process.on(<span class="hljs-string">'SIGINT'</span>, <span class="hljs-function">() =&gt;</span> {
  <span class="hljs-built_in">console</span>.log(<span class="hljs-string">'SIGINT signal received.'</span>);
  server.close(<span class="hljs-function">() =&gt;</span> {
    <span class="hljs-built_in">console</span>.log(<span class="hljs-string">'Closed out remaining connections'</span>);
    <span class="hljs-comment">// Additional cleanup tasks go here</span>
  });
});
</code></pre>
<h2 id="heading-closing-existing-connections"><strong>Closing Existing Connections</strong></h2>
<p>When <code>server.close()</code> is called, the server stops accepting new connections and waits for all existing connections to close. The function that we pass to <code>server.close()</code> will be called once all connections are closed. This is where we can perform any additional cleanup tasks that need to happen before the application shuts down.</p>
<h2 id="heading-performing-additional-cleanup"><strong>Performing Additional Cleanup</strong></h2>
<p>Depending on the needs of your application, you may have additional cleanup tasks that need to happen when your application shuts down. For example, if you have a database connection, you should close it before your application exits. This cleanup code should be placed inside the callback function that you pass to <code>server.close()</code>.</p>
<pre><code class="lang-typescript">process.on(<span class="hljs-string">'SIGTERM'</span>, <span class="hljs-function">() =&gt;</span> {
  <span class="hljs-built_in">console</span>.log(<span class="hljs-string">'SIGTERM signal received.'</span>);
  server.close(<span class="hljs-function">() =&gt;</span> {
    <span class="hljs-built_in">console</span>.log(<span class="hljs-string">'Closed out remaining connections'</span>);
    <span class="hljs-comment">// Additional cleanup tasks go here, e.g., close database connection</span>
    process.exit(<span class="hljs-number">0</span>);
  });
});

process.on(<span class="hljs-string">'SIGINT'</span>, <span class="hljs-function">() =&gt;</span> {
  <span class="hljs-built_in">console</span>.log(<span class="hljs-string">'SIGINT signal received.'</span>);
  server.close(<span class="hljs-function">() =&gt;</span> {
    <span class="hljs-built_in">console</span>.log(<span class="hljs-string">'Closed out remaining connections'</span>);
    <span class="hljs-comment">// Additional cleanup tasks go here, e.g., close database connection</span>
    process.exit(<span class="hljs-number">0</span>);
  });
});
</code></pre>
<h2 id="heading-handling-shutdown-in-a-dockerized-nodejs-application"><strong>Handling Shutdown in a Dockerized Node.js Application</strong></h2>
<p>When running a Node.js application in a Docker container, there are additional considerations to take into account. This section discusses how Docker sends shutdown signals and how to ensure your Node.js application can handle them correctly.</p>
<h2 id="heading-understanding-docker-shutdown-signals"><strong>Understanding Docker Shutdown Signals</strong></h2>
<p>When Docker is asked to stop a running container, it sends a <code>SIGTERM</code> signal to the main process running in the container. This is Docker's way of asking the process to shut down gracefully, by finishing what it's currently doing, cleaning up as needed, and then terminating.</p>
<p>However, if the process does not terminate within a certain period (10 seconds by default), Docker will then send a <code>SIGKILL</code> signal to forcibly terminate the process. This is akin to pulling the plug on the application: it won't have a chance to finish what it's doing or clean up.</p>
<p>This is why our Node.js application needs to listen for and handle the <code>SIGTERM</code> signal, as we discussed in previous sections. By handling <code>SIGTERM</code>, our application can ensure it shuts down gracefully when Docker asks it to stop.</p>
<h2 id="heading-adjusting-dockers-grace-period"><strong>Adjusting Docker's Grace Period</strong></h2>
<p>Sometimes, our application may need more than 10 seconds to shut down gracefully. For example, it might need to finish processing a long-running request, or it might need to wait for a database transaction to commit.</p>
<p>In such cases, we can tell Docker to wait longer before it sends the <code>SIGKILL</code> signal, by using the <code>--stop-timeout</code> option when we run our Docker container. This option takes the number of seconds to wait as its argument.</p>
<p>For example, to start a Docker container and give it 30 seconds to shut down gracefully before forcibly killing it, we would use a command like this:</p>
<pre><code class="lang-dockerfile">docker <span class="hljs-keyword">run</span><span class="bash"> --stop-timeout 30 my-nodejs-app</span>
</code></pre>
<p>Keep in mind that while extending the stop timeout can help in some situations, it's not a panacea. If your application consistently takes a long time to shut down, it may be a sign that it needs to be optimized or refactored.</p>
<h2 id="heading-conclusion"><strong>Conclusion</strong></h2>
<p>Handling shutdown signals in your Node.js applications allows you to manage resources properly, reduce potential data loss or corruption, provide a better user experience, and more. By understanding how to handle these signals, you can make your applications more robust and reliable, both in development and in production.</p>
]]></content:encoded></item><item><title><![CDATA[Do Not Tolerate Flaky Tests. Fix Them (or Delete Them).]]></title><description><![CDATA[As a DevOps engineer, you know the importance of testing in ensuring the quality and reliability of your software. However, not all tests are created equal, and some tests are more reliable than others. Flaky tests are tests that fail intermittently,...]]></description><link>https://blog.yusadolat.me/do-not-tolerate-flaky-tests-fix-them-or-delete-them</link><guid isPermaLink="true">https://blog.yusadolat.me/do-not-tolerate-flaky-tests-fix-them-or-delete-them</guid><category><![CDATA[Testing]]></category><category><![CDATA[flaky]]></category><dc:creator><![CDATA[Yusuf Adeyemo]]></dc:creator><pubDate>Mon, 17 Apr 2023 09:04:54 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1681722209856/c6756382-7fab-4f6a-9f05-8bdbc2dcec8b.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>As a DevOps engineer, you know the importance of testing in ensuring the quality and reliability of your software. However, not all tests are created equal, and some tests are more reliable than others. Flaky tests are tests that fail intermittently, even though the code being tested is not broken. These tests can be frustrating to deal with and can waste valuable time and resources. In this article, we will discuss why you should not tolerate flaky tests and what you can do to fix or delete them.</p>
<h3 id="heading-why-you-should-not-tolerate-flaky-tests">Why You Should Not Tolerate Flaky Tests</h3>
<p>Flaky tests can cause several problems that affect the quality and reliability of your software. Here are some reasons why you should not tolerate flaky tests:</p>
<ol>
<li><p>Flaky tests can make it difficult to identify real bugs: Flaky tests can mask real bugs in your code, making it difficult to identify and fix them.</p>
</li>
<li><p>Flaky tests can waste valuable time and resources: Flaky tests can consume valuable time and resources that could be better spent on other tasks.</p>
</li>
<li><p>Flaky tests can erode confidence in your tests: Flaky tests can erode confidence in your tests and make it difficult to trust the results.</p>
</li>
<li><p>Flaky tests can lead to false positives: Flaky tests can lead to false positives, which can cause unnecessary rework and delays.</p>
</li>
</ol>
<h3 id="heading-what-you-can-do-to-fix-or-delete-flaky-tests">What You Can Do to Fix or Delete Flaky Tests</h3>
<p>Fixing or deleting flaky tests can help you avoid the problems caused by flaky tests. Here are some things you can do to fix or delete flaky tests:</p>
<ol>
<li><p>Identify the root cause of the flakiness: To fix a flaky test, you first need to understand why it flakes. This can be done by analyzing the test results and identifying patterns (see the sketch after this list).</p>
</li>
<li><p>Fix the root cause of the flakiness: Once you have identified the root cause of the flakiness, you can fix it. This may involve modifying the test code, the test environment, or the application code.</p>
</li>
<li><p>Delete the flaky tests: If you are unable to fix the root cause of the flakiness, you may need to delete the flaky tests. This can help you avoid the problems caused by flaky tests.</p>
</li>
<li><p>Prioritize fixing flaky tests: Fixing flaky tests should be a priority. You should allocate the necessary time and resources to fix or delete flaky tests.</p>
</li>
</ol>
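<p>To illustrate steps 1 and 2, here is a minimal, self-contained Python sketch of a classic root cause: a test that races against real wall-clock time, followed by a deterministic fix using an injectable clock (the tiny <code>Cache</code> class exists only for this example):</p>
<pre><code class="lang-python">import time

class Cache:
    """Tiny in-memory cache with an injectable clock (illustrative)."""
    def __init__(self, clock=time.monotonic):
        self.clock = clock
        self.store = {}

    def set(self, key, value, ttl):
        self.store[key] = (value, self.clock() + ttl)

    def get(self, key):
        value, expires = self.store.get(key, (None, 0))
        return value if self.clock() &lt; expires else None

# Flaky: leaves a 10 ms margin before expiry; loses the race on a busy CI box
def test_not_yet_expired_flaky():
    cache = Cache()
    cache.set("k", "v", ttl=0.05)
    time.sleep(0.04)
    assert cache.get("k") == "v"

# Deterministic fix: control the clock instead of sleeping
def test_not_yet_expired_deterministic():
    now = [0.0]
    cache = Cache(clock=lambda: now[0])
    cache.set("k", "v", ttl=1)
    now[0] = 0.5                # virtual time is still inside the TTL
    assert cache.get("k") == "v"
</code></pre>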
<h3 id="heading-conclusion">Conclusion</h3>
<p>Flaky tests can cause several problems that affect the quality and reliability of your software. To avoid these problems, you should not tolerate flaky tests and should fix or delete them as soon as possible. By identifying the root cause of the flakiness and fixing it or deleting the flaky tests, you can improve the reliability and quality of your software. Remember, a reliable and trustworthy test suite is crucial for the success of your DevOps pipeline.</p>
<p>We hope this article has been helpful in understanding why you should not tolerate flaky tests and what you can do to fix or delete them. If you have any questions or comments, feel free to leave them below.</p>
]]></content:encoded></item><item><title><![CDATA[How to resolve AWS S3 CORS error]]></title><description><![CDATA[The error message you're seeing is due to the Cross-Origin Resource Sharing (CORS) policy on your AWS S3 bucket. This policy determines who can access your bucket's contents from a different domain. In my case, it seems the policy is not allowing the...]]></description><link>https://blog.yusadolat.me/how-to-resolve-aws-s3-cors-error</link><guid isPermaLink="true">https://blog.yusadolat.me/how-to-resolve-aws-s3-cors-error</guid><category><![CDATA[AWS]]></category><category><![CDATA[S3]]></category><dc:creator><![CDATA[Yusuf Adeyemo]]></dc:creator><pubDate>Fri, 31 Mar 2023 07:40:33 GMT</pubDate><content:encoded><![CDATA[<p>The error message you're seeing is due to the Cross-Origin Resource Sharing (CORS) policy on your AWS S3 bucket. This policy determines who can access your bucket's contents from a different domain. In my case, it seems the policy is not allowing the local server (<a target="_blank" href="http://localhost:3001">http://localhost:3001</a>) to access the resources.</p>
<p>To resolve this issue, you need to update the CORS policy for your S3 bucket. Here's a step-by-step guide:</p>
<ol>
<li><p>Sign in to the AWS Management Console and open the Amazon S3 console at <a target="_blank" href="https://console.aws.amazon.com/s3/">https://console.aws.amazon.com/s3/</a>.</p>
</li>
<li><p>In the bucket list, choose the name of the bucket that you want to add a CORS policy to.</p>
</li>
<li><p>Choose the 'Permissions' tab.</p>
</li>
<li><p>Scroll down to the 'Cross-origin resource sharing (CORS)' section and choose 'Edit'.</p>
</li>
<li><p>In the CORS configuration editor, add a new CORS rule. For example:</p>
</li>
</ol>
<pre><code class="lang-json">[
    {
        <span class="hljs-attr">"AllowedHeaders"</span>: [<span class="hljs-string">"*"</span>],
        <span class="hljs-attr">"AllowedMethods"</span>: [<span class="hljs-string">"GET"</span>, <span class="hljs-string">"PUT"</span>, <span class="hljs-string">"POST"</span>, <span class="hljs-string">"DELETE"</span>],
        <span class="hljs-attr">"AllowedOrigins"</span>: [<span class="hljs-string">"http://localhost:3001"</span>],
        <span class="hljs-attr">"ExposeHeaders"</span>: []
    }
]
</code></pre>
<ol>
<li>Choose 'Save'.</li>
</ol>
<p>This policy allows your local server (<a target="_blank" href="http://localhost:3001">http://localhost:3001</a>) to perform GET, PUT, POST, and DELETE operations on your S3 bucket. Please adjust the policy according to your needs.</p>
<p>Remember, CORS policies can pose a security risk if not configured properly. Only allow access to trusted domains and use the strictest settings that your application allows.</p>
]]></content:encoded></item><item><title><![CDATA[Asynchronous ML Model Training with AWS Lambda Invocation]]></title><description><![CDATA[Imagine being tasked with the challenge of training machine learning (ML) models in a serverless environment at work. Your boss expects a fast, efficient, and scalable solution to this problem. A key requirement is for the primary Lambda function to ...]]></description><link>https://blog.yusadolat.me/asynchronous-ml-model-training-with-aws-lambda-invocation</link><guid isPermaLink="true">https://blog.yusadolat.me/asynchronous-ml-model-training-with-aws-lambda-invocation</guid><category><![CDATA[AWS]]></category><category><![CDATA[Machine Learning]]></category><category><![CDATA[asynchronous]]></category><dc:creator><![CDATA[Yusuf Adeyemo]]></dc:creator><pubDate>Thu, 23 Mar 2023 14:59:19 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1679583498829/aa57ed7d-11dc-4166-a0c1-0b6819e24ae8.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Imagine being tasked with the challenge of training machine learning (ML) models in a serverless environment at work. Your boss expects a fast, efficient, and scalable solution to this problem. A key requirement is for the primary Lambda function to return a response immediately after triggering the model training process, without waiting for the training to complete. Sounds like a tall order, doesn't it? Fortunately, AWS Lambda has got your back. In this article, we'll explore how Lambda invocation can be utilized to solve this issue by asynchronously calling another Lambda function using Python. We'll also demonstrate how the child function can receive data from the parent function.</p>
<h3 id="heading-why-use-lambda-invocation">Why Use Lambda Invocation?</h3>
<p>AWS Lambda is a serverless compute service that allows you to run your code without provisioning or managing servers. Lambda invocation is a powerful feature that enables one Lambda function to call another, allowing you to offload tasks, process events in parallel, and set up complex workflows. In our case, invoking another Lambda function to train ML models enables the primary Lambda function to return a response immediately, ensuring that the main process doesn't get blocked waiting for the training job to finish.</p>
<h3 id="heading-how-to-invoke-a-lambda-function-with-python">How to Invoke a Lambda Function with Python</h3>
<p>To invoke a Lambda function from another, you'll first need the AWS SDK for Python, Boto3. Make sure you have it installed by running:</p>
<pre><code class="lang-python">pip install boto3
</code></pre>
<p>Next, you need to set up the necessary permissions to allow the parent Lambda function to call the child Lambda function. In the AWS Management Console, navigate to the IAM role assigned to the parent Lambda function and attach the <code>AWSLambdaRole</code> policy.</p>
<p>Now let's create a simple parent Lambda function that will invoke the child Lambda function. The parent function will use Boto3's <code>invoke</code> method to make an asynchronous call to the child function.</p>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> boto3
<span class="hljs-keyword">import</span> json

<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">lambda_handler</span>(<span class="hljs-params">event, context</span>):</span>
    lambda_client = boto3.client(<span class="hljs-string">'lambda'</span>)

    <span class="hljs-comment"># Replace 'child-function-name' with the actual name of your child Lambda function</span>
    response = lambda_client.invoke(
        FunctionName=<span class="hljs-string">'child-function-name'</span>,
        InvocationType=<span class="hljs-string">'Event'</span>,  <span class="hljs-comment"># Asynchronous invocation</span>
        Payload=json.dumps(event)
    )

    <span class="hljs-keyword">return</span> {
        <span class="hljs-string">'statusCode'</span>: <span class="hljs-number">200</span>,
        <span class="hljs-string">'body'</span>: <span class="hljs-string">'Model training job started.'</span>
    }
</code></pre>
<p>Here's a sample child Lambda function that trains an ML model using the data passed from the parent function:  </p>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> json

<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">lambda_handler</span>(<span class="hljs-params">event, context</span>):</span>
    <span class="hljs-comment"># Load data from the event object sent by the parent Lambda function</span>
    training_data = json.loads(event[<span class="hljs-string">'body'</span>])

    <span class="hljs-comment"># Train your ML model here using the training_data</span>
    <span class="hljs-comment"># ...</span>

    <span class="hljs-keyword">return</span> {
        <span class="hljs-string">'statusCode'</span>: <span class="hljs-number">200</span>,
        <span class="hljs-string">'body'</span>: <span class="hljs-string">'Model training completed.'</span>
    }
</code></pre>
<p>In this example, the parent Lambda function sends its entire input event as the payload to the child function. Note that an asynchronous invocation delivers the payload to the child already parsed as its <code>event</code> object, so the <code>json.loads(event['body'])</code> call assumes the parent was itself triggered by an API Gateway-style request whose <code>body</code> is a JSON string.</p>
<h3 id="heading-conclusion">Conclusion</h3>
<p>As demonstrated in this article, using AWS Lambda invocation to asynchronously call another Lambda function is a powerful technique that can solve complex use cases such as initiating ML model training without waiting for the process to complete. By leveraging this approach, you can achieve faster response times, better scalability, and an efficient serverless architecture for your applications. The Python examples provided serve as a solid foundation to build upon for your specific use case, allowing you to focus on the core logic of your application rather than infrastructure management.</p>
]]></content:encoded></item><item><title><![CDATA[Terraform Data Sources: Your Key to Dynamic and Adaptable Infrastructure]]></title><description><![CDATA[Terraform is a popular infrastructure-as-code (IaC) tool that allows users to define and manage their infrastructure using code. Terraform uses a declarative language called HCL to define infrastructure resources, and it supports a wide variety of cl...]]></description><link>https://blog.yusadolat.me/terraform-data-sources-your-key-to-dynamic-and-adaptable-infrastructure</link><guid isPermaLink="true">https://blog.yusadolat.me/terraform-data-sources-your-key-to-dynamic-and-adaptable-infrastructure</guid><category><![CDATA[Terraform]]></category><dc:creator><![CDATA[Yusuf Adeyemo]]></dc:creator><pubDate>Fri, 24 Feb 2023 16:56:03 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1677257587446/161e7755-27ef-4ec0-80ba-8746a7a7b183.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Terraform is a popular infrastructure-as-code (IaC) tool that allows users to define and manage their infrastructure using code. Terraform uses a declarative language called HCL to define infrastructure resources, and it supports a wide variety of cloud providers and other infrastructure services.</p>
<p>One key aspect of using Terraform effectively is understanding the various available data sources. In this article, we'll explore what data sources are, how they work, and how you can use them in your Terraform code to create a more dynamic and flexible infrastructure.</p>
<h3 id="heading-what-are-data-sources-in-terraform">What are Data Sources in Terraform?</h3>
<p>Data sources in Terraform allow you to make use of external resources that are not part of your configuration. Information about a resource, such as its ID or IP address, can be retrieved from a data source and then used in your Terraform script.</p>
<p>Data sources are useful in a variety of scenarios. For example, you may want to create a security group in AWS that allows traffic from a specific IP address range. To do this, you need to know the public IP address of the machine that will be accessing the security group. You can use a data source to retrieve the current public IP address and then use that value in your security group configuration.</p>
<h3 id="heading-how-do-data-sources-work-in-terraform">How do Data Sources Work in Terraform?</h3>
<p>Data sources are defined in your Terraform code using a special block type called "data". The "data" block allows you to specify the type of data source you want to use, as well as any necessary parameters or filters.</p>
<p>Here's an example of a data source block that retrieves information about an AWS EC2 instance:</p>
<pre><code class="lang-kotlin"><span class="hljs-keyword">data</span> <span class="hljs-string">"aws_instance"</span> <span class="hljs-string">"example-server"</span> {
  instance_id = <span class="hljs-string">"i-0123456789abcdef0"</span>
}
</code></pre>
<p>In this example, the data source block has a type of "aws_instance" and a name of "example-server". The "instance_id" parameter specifies the ID of the EC2 instance that we want to retrieve information about.</p>
<p>Once you've defined a data source, you can reference it in your Terraform code using the syntax "data.&lt;TYPE&gt;.&lt;NAME&gt;.&lt;ATTRIBUTE&gt;". For example, to reference the public IP address of the EC2 instance in our previous example, we could use the following syntax:</p>
<pre><code class="lang-kotlin">resource <span class="hljs-string">"aws_security_group_rule"</span> <span class="hljs-string">"example"</span> {
  type        = <span class="hljs-string">"ingress"</span>
  from_port   = <span class="hljs-number">80</span>
  to_port     = <span class="hljs-number">80</span>
  protocol    = <span class="hljs-string">"tcp"</span>
  cidr_blocks = [<span class="hljs-keyword">data</span>.aws_instance.example-server.public_ip]
}
</code></pre>
<p>In this example, we're creating an AWS security group rule that allows traffic on port 80 from the public IP address of the EC2 instance we retrieved using the data source.</p>
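<p>Another everyday use of data sources is looking up values that change over time, such as the newest AMI, so they never have to be hard-coded. A short sketch (the filter values are illustrative):</p>
<pre><code class="lang-hcl"># Look up the most recent Amazon Linux 2 AMI at plan time
data "aws_ami" "amazon_linux" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-*-x86_64-gp2"]
  }
}

resource "aws_instance" "web" {
  ami           = data.aws_ami.amazon_linux.id
  instance_type = "t3.micro"
}
</code></pre>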
<h3 id="heading-conclusion">Conclusion:</h3>
<p>Data sources are a powerful feature of Terraform that allow you to retrieve information about existing resources and use that information in your configuration. With data sources, you can create a more dynamic and flexible infrastructure that can adapt to changes in your environment. By understanding how data sources work and how to use them effectively, you can take full advantage of Terraform's capabilities and create an infrastructure that is easier to manage and maintain over time.</p>
]]></content:encoded></item><item><title><![CDATA[Observability in Kubernetes: Understanding Liveness Probes with Examples]]></title><description><![CDATA[Kubernetes is a highly effective and widely-used platform for container orchestration. One of the key features of Kubernetes is the ability to monitor the health of applications running in containers. Observability in Kubernetes refers to the ability...]]></description><link>https://blog.yusadolat.me/observability-in-kubernetes-understanding-liveness-probes-with-examples</link><guid isPermaLink="true">https://blog.yusadolat.me/observability-in-kubernetes-understanding-liveness-probes-with-examples</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[observability]]></category><dc:creator><![CDATA[Yusuf Adeyemo]]></dc:creator><pubDate>Fri, 10 Feb 2023 11:35:48 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1676028872652/3c6f6e6b-2180-4ead-b3ed-887b4265475b.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Kubernetes is a highly effective and widely-used platform for container orchestration. One of the key features of Kubernetes is the ability to monitor the health of applications running in containers. Observability in Kubernetes refers to the ability to monitor and diagnose the behavior of a deployed application.</p>
<p>In this article, we will focus on one aspect of observability in Kubernetes – Liveness Probes. Liveness probes are used to check if an application is running correctly. They are an important tool for making sure that applications running in containers are reliable and always available.</p>
<h3 id="heading-what-are-liveness-probes">What are Liveness Probes?</h3>
<p>Liveness Probes are a type of Kubernetes mechanism that allows you to monitor the health of a container running in a pod. The purpose of Liveness Probes is to detect when a container is no longer running as expected, and if necessary, restart it. This helps to ensure that the containers in a pod are always running and can respond to requests.</p>
<p>Liveness Probes are defined in the pod specification (as JSON or YAML) and can be specified as either an HTTP request, a TCP socket connection, or a command executed inside the container. The kubelet on each node periodically performs the specified Liveness Probe, and if it fails, the kubelet will restart the container.</p>
<h3 id="heading-how-do-liveness-probes-work">How do Liveness Probes Work?</h3>
<p>Liveness Probes work by sending requests to a container running in a pod to check its health status. The request can be either an HTTP request, a TCP socket connection, or a command executed inside the container. If the container is healthy, the probe returns a success result. If it is not, the probe returns a failure, and the kubelet will restart the container.</p>
<h3 id="heading-examples-of-using-liveness-probes">Examples of Using Liveness Probes</h3>
<p>There are several types of liveness probes that can be used, including:</p>
<ul>
<li><p>HTTP requests</p>
</li>
<li><p>TCP connections</p>
</li>
<li><p>Command execution</p>
</li>
</ul>
<p>Each type of probe has its own use case, and can be defined in the pod definition file.</p>
<h3 id="heading-http-requests">HTTP Requests</h3>
<p>HTTP requests are the most common type of liveness probe. They can be used to check if an application is responding to requests, and to ensure that the application is accessible.</p>
<p>To define an HTTP liveness probe in a pod definition file, you would add the following code:  </p>
<pre><code class="lang-yaml"><span class="hljs-attr">livenessProbe:</span>
  <span class="hljs-attr">httpGet:</span>
    <span class="hljs-attr">path:</span> <span class="hljs-string">/ping</span>
    <span class="hljs-attr">port:</span> <span class="hljs-number">3000</span>
  <span class="hljs-attr">initialDelaySeconds:</span> <span class="hljs-number">5</span>
  <span class="hljs-attr">timeoutSeconds:</span> <span class="hljs-number">1</span>
  <span class="hljs-attr">periodSeconds:</span> <span class="hljs-number">10</span>
  <span class="hljs-attr">successThreshold:</span> <span class="hljs-number">1</span>
  <span class="hljs-attr">failureThreshold:</span> <span class="hljs-number">3</span>
</code></pre>
<p>In this example, the liveness probe will send an HTTP GET request to <code>/ping</code> on port 3000. The probe will be initiated after a delay of 5 seconds, and will time out after 1 second. The probe will run every 10 seconds and will be considered successful if it returns an HTTP status code between 200 and 399. If the probe fails 3 times in a row, the container will be restarted.</p>
<h3 id="heading-tcp-connections">TCP Connections</h3>
<p>TCP connections can be used to check if an application is listening on a specific port. They can be used to check if an application is running and to make sure the application can be reached.</p>
<p>To define a TCP liveness probe in a pod definition file, you would add the following code:  </p>
<pre><code class="lang-yaml"><span class="hljs-attr">livenessProbe:</span>
  <span class="hljs-attr">tcpSocket:</span>
    <span class="hljs-attr">port:</span> <span class="hljs-number">3000</span>
  <span class="hljs-attr">initialDelaySeconds:</span> <span class="hljs-number">5</span>
  <span class="hljs-attr">timeoutSeconds:</span> <span class="hljs-number">1</span>
  <span class="hljs-attr">periodSeconds:</span> <span class="hljs-number">10</span>
  <span class="hljs-attr">successThreshold:</span> <span class="hljs-number">1</span>
  <span class="hljs-attr">failureThreshold:</span> <span class="hljs-number">3</span>
</code></pre>
<p>In this example, the liveness probe will attempt to establish a TCP connection on port 3000. The probe will be initiated after a delay of 5 seconds, and will time out after 1 second. The probe will run every 10 seconds and will be considered successful if it is able to establish a connection. If the probe fails 3 times in a row, the container will be restarted.</p>
<h3 id="heading-command-execution">Command Execution</h3>
<p>Command execution can be used to check if an application is running properly by executing a command within the container. This is useful for checking the status of an application, and can be used to ensure that the application is running correctly.  </p>
<pre><code class="lang-yaml"><span class="hljs-attr">livenessProbe:</span>
      <span class="hljs-attr">exec:</span>
        <span class="hljs-attr">command:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-string">cat</span>
        <span class="hljs-bullet">-</span> <span class="hljs-string">/tmp/healthy</span>
      <span class="hljs-attr">initialDelaySeconds:</span> <span class="hljs-number">5</span>
      <span class="hljs-attr">periodSeconds:</span> <span class="hljs-number">5</span>
</code></pre>
<p>In this example, the liveness probe executes the command <code>cat /tmp/healthy</code> in the target container. If the command succeeds, it returns 0, and the kubelet considers the container to be alive and healthy. If the command returns a non-zero exit code, the kubelet kills the container and restarts it.</p>
<h3 id="heading-conclusion">Conclusion</h3>
<p>Liveness probes are a key component of observability in Kubernetes, providing a mechanism to monitor the health of containers and ensure they are running as expected. By using Liveness Probes, you can make your Kubernetes cluster applications more reliable and available. Whether you choose to use HTTP Liveness Probes, TCP Socket Connection Liveness Probes, or Command Execution Liveness Probes, you can rest assured that your containers are being monitored and will be restarted if necessary to ensure they are always running and available to respond to requests.</p>
]]></content:encoded></item><item><title><![CDATA[Observability in Kubernetes: Understanding Readiness Probes with Examples]]></title><description><![CDATA[Kubernetes is an open-source platform that automates the deployment, scaling, and management of containerized applications. In a production environment, it's crucial to ensure that your applications are running smoothly and are available to handle in...]]></description><link>https://blog.yusadolat.me/observability-in-kubernetes-understanding-readiness-probes-with-examples</link><guid isPermaLink="true">https://blog.yusadolat.me/observability-in-kubernetes-understanding-readiness-probes-with-examples</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[observability]]></category><category><![CDATA[pod]]></category><dc:creator><![CDATA[Yusuf Adeyemo]]></dc:creator><pubDate>Tue, 31 Jan 2023 12:30:39 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1675143983842/74504f45-82fc-4896-959e-5f72577b5c32.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Kubernetes is an open-source platform that automates the deployment, scaling, and management of containerized applications. In a production environment, it's crucial to ensure that your applications are running smoothly and are available to handle incoming requests. That's where the concept of observability comes into play.</p>
<p>Readiness probes are one of the key components of observability in Kubernetes, providing valuable information about the health and availability of your applications. They help to solve the problem of ensuring that only healthy containers receive traffic, and provide several advantages for your applications and infrastructure.</p>
<h3 id="heading-the-problem-is-my-container-ready-to-start-accepting-traffic">The Problem: Is my <strong>container ready to start accepting traffic?</strong></h3>
<p>In a large-scale production environment, it's critical to ensure that containers are healthy and ready to handle incoming requests. Without the proper tools and processes in place, it's difficult to determine the health of individual containers, leading to potential downtime and a negative impact on your users.</p>
<h3 id="heading-the-solution-readiness-probes">The Solution: Readiness Probes</h3>
<p>Readiness probes in Kubernetes provide a solution to the problem of determining whether a container is ready to serve. The kubelet executes them at regular intervals, and if a probe fails, the pod is marked as not ready and Kubernetes removes it from the matching Service's endpoints, so no new traffic is routed to it until the probe succeeds again.</p>
<h3 id="heading-what-you-get-improved-reliability-and-performance">What you get: Improved Reliability and Performance</h3>
<p>By implementing readiness probes in your Kubernetes infrastructure, you can enjoy several advantages, including improved reliability and performance. With the ability to monitor the health of your containers in real-time, you can quickly identify and resolve any issues that may arise, reducing downtime and improving the overall availability of your applications. Additionally, by only sending traffic to healthy containers, you can improve the performance of your applications and provide a better experience for your users.</p>
<h3 id="heading-setting-up-readiness-probes-in-kubernetes">Setting up Readiness Probes in Kubernetes</h3>
<p>To set up readiness probes in Kubernetes, you'll need to define a readiness probe as part of your container spec in the deployment configuration file. There are three types of probes you can use: HTTP, TCP, and Command.</p>
<p>Here's an example of how to set up an HTTP readiness probe in a Kubernetes deployment file:  </p>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">apps/v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Deployment</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">my-app-deployment</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">replicas:</span> <span class="hljs-number">3</span>
  <span class="hljs-attr">selector:</span>
    <span class="hljs-attr">matchLabels:</span>
      <span class="hljs-attr">app:</span> <span class="hljs-string">my-app</span>
  <span class="hljs-attr">template:</span>
    <span class="hljs-attr">metadata:</span>
      <span class="hljs-attr">labels:</span>
        <span class="hljs-attr">app:</span> <span class="hljs-string">my-app</span>
    <span class="hljs-attr">spec:</span>
      <span class="hljs-attr">containers:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">my-app-container</span>
        <span class="hljs-attr">image:</span> <span class="hljs-string">my-app-image</span>
        <span class="hljs-attr">ports:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">containerPort:</span> <span class="hljs-number">3000</span>
        <span class="hljs-attr">readinessProbe:</span>
          <span class="hljs-attr">httpGet:</span>
            <span class="hljs-attr">path:</span> <span class="hljs-string">/ping</span>
            <span class="hljs-attr">port:</span> <span class="hljs-number">8080</span>
          <span class="hljs-attr">initialDelaySeconds:</span> <span class="hljs-number">5</span>
          <span class="hljs-attr">periodSeconds:</span> <span class="hljs-number">10</span>
</code></pre>
<p>In this example, the readiness probe is an HTTP GET request to the <code>/ping</code> endpoint on port 3000. The probe will be executed every 10 seconds, with an initial delay of 5 seconds.</p>
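<p>Once the deployment is applied, you can watch readiness flip from <code>0/1</code> to <code>1/1</code> as the probes start succeeding. A minimal check, assuming the manifest above is saved as <code>deployment.yaml</code>:</p>
<pre><code class="lang-bash">kubectl apply -f deployment.yaml

# READY shows 0/1 until the first successful probe, then 1/1
kubectl get pods -l app=my-app -w
</code></pre>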
<p>Here's an example of how to set up a TCP readiness probe in a Kubernetes deployment file:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">apps/v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Deployment</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">my-app-deployment</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">replicas:</span> <span class="hljs-number">3</span>
  <span class="hljs-attr">selector:</span>
    <span class="hljs-attr">matchLabels:</span>
      <span class="hljs-attr">app:</span> <span class="hljs-string">my-app</span>
  <span class="hljs-attr">template:</span>
    <span class="hljs-attr">metadata:</span>
      <span class="hljs-attr">labels:</span>
        <span class="hljs-attr">app:</span> <span class="hljs-string">my-app</span>
    <span class="hljs-attr">spec:</span>
      <span class="hljs-attr">containers:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">my-app-container</span>
        <span class="hljs-attr">image:</span> <span class="hljs-string">my-app-image</span>
        <span class="hljs-attr">ports:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">containerPort:</span> <span class="hljs-number">3000</span>
        <span class="hljs-attr">readinessProbe:</span>
          <span class="hljs-attr">tcpSocket:</span>
            <span class="hljs-attr">port:</span> <span class="hljs-number">3000</span>
          <span class="hljs-attr">initialDelaySeconds:</span> <span class="hljs-number">5</span>
          <span class="hljs-attr">periodSeconds:</span> <span class="hljs-number">10</span>
</code></pre>
<p>In this example, the readiness probe is a TCP connection to port 3000. The probe will be executed every 10 seconds, with an initial delay of 5 seconds.</p>
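<p>The practical effect of a failing readiness probe is that the pod drops out of the Service's endpoint list. Assuming a Service named <code>my-app-service</code> that selects <code>app: my-app</code> (not shown above), you can observe this directly:</p>
<pre><code class="lang-bash"># Ready pods appear as addresses; not-ready pods are omitted
kubectl get endpoints my-app-service
</code></pre>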
<p>Here's an example of how to set up a Command readiness probe in a Kubernetes deployment file:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">apps/v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Deployment</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">my-app-deployment</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">replicas:</span> <span class="hljs-number">3</span>
  <span class="hljs-attr">selector:</span>
    <span class="hljs-attr">matchLabels:</span>
      <span class="hljs-attr">app:</span> <span class="hljs-string">my-app</span>
  <span class="hljs-attr">template:</span>
    <span class="hljs-attr">metadata:</span>
      <span class="hljs-attr">labels:</span>
        <span class="hljs-attr">app:</span> <span class="hljs-string">my-app</span>
    <span class="hljs-attr">spec:</span>
      <span class="hljs-attr">containers:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">my-app-container</span>
        <span class="hljs-attr">image:</span> <span class="hljs-string">my-app-image</span>
        <span class="hljs-attr">ports:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">containerPort:</span> <span class="hljs-number">3000</span>
        <span class="hljs-attr">readinessProbe:</span>
          <span class="hljs-attr">exec:</span>
            <span class="hljs-attr">command:</span>
            <span class="hljs-bullet">-</span> <span class="hljs-string">/bin/sh</span>
            <span class="hljs-bullet">-</span> <span class="hljs-string">-c</span>
            <span class="hljs-bullet">-</span> <span class="hljs-string">"curl -f http://localhost:3000/ping || exit 1"</span>
          <span class="hljs-attr">initialDelaySeconds:</span> <span class="hljs-number">5</span>
          <span class="hljs-attr">periodSeconds:</span> <span class="hljs-number">10</span>
</code></pre>
<p>In this example, the readiness probe is a shell script that uses <code>curl</code> to make an HTTP request to the <code>/ping</code> endpoint on port 3000. The probe will be executed every 10 seconds, with an initial delay of 5 seconds.</p>
<p>If the <code>curl</code> command returns a non-zero exit code, it means the probe failed and the container is considered not ready.</p>
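<p>You can also query the pod's <code>Ready</code> condition directly instead of eyeballing the <code>READY</code> column. A small sketch, assuming a pod named <code>my-app</code>:</p>
<pre><code class="lang-bash"># Prints "True" once the readiness probe passes, "False" otherwise
kubectl get pod my-app -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
</code></pre>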
<h3 id="heading-conclusion">Conclusion</h3>
<p>Readiness probes are a crucial component of observability in Kubernetes, providing valuable information about the health and availability of your containers. By implementing readiness probes, you can improve the reliability and performance of your applications, ensuring that they are always available to handle incoming requests.</p>
]]></content:encoded></item><item><title><![CDATA[5 tools to supercharge your Terraform Development]]></title><description><![CDATA[Want to take your Terraform development to the next level? Look no further! Introducing the 5 tools to supercharge your Terraform development: Terragrunt, Terratest, Terraform-docs, TFLint, and Infracost. These powerful tools will help you to organiz...]]></description><link>https://blog.yusadolat.me/5-tools-to-supercharge-your-terraform-development</link><guid isPermaLink="true">https://blog.yusadolat.me/5-tools-to-supercharge-your-terraform-development</guid><category><![CDATA[Terraform]]></category><category><![CDATA[tools]]></category><dc:creator><![CDATA[Yusuf Adeyemo]]></dc:creator><pubDate>Thu, 12 Jan 2023 13:57:39 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1673531433684/8ab86b60-6b73-4eb1-bd1d-4ff0d02e9b79.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Want to take your Terraform development to the next level? Look no further! Introducing the 5 tools to supercharge your Terraform development: Terragrunt, Terratest, Terraform-docs, TFLint, and Infracost. These powerful tools will help you to organize and test your Terraform code, generate documentation, catch errors early, and estimate the monthly cost of running infrastructure. Say goodbye to manual and tedious tasks and hello to efficient and effective infrastructure management with these must-have tools.</p>
<ol>
<li><p><a target="_blank" href="https://terragrunt.gruntwork.io/docs/#getting-started">Terragrunt</a>: This tool is a thin wrapper for Terraform that provides extra features, such as locking for Terraform state and the ability to keep your Terraform configurations DRY. With Terragrunt, you can organize your Terraform code into reusable modules, and reuse those modules across multiple projects. This helps to keep your Terraform code organized and reduces duplicated effort.</p>
</li>
<li><p><a target="_blank" href="https://terratest.gruntwork.io/docs/#getting-started">Terratest</a>: This is a Go library that makes it easier to write automated tests for your Terraform code. Terratest provides utility functions for interacting with Terraform, as well as libraries for common infrastructure-as-code tools, such as AWS, GCP, and Kubernetes. With Terratest, you can write tests that check that your Terraform code creates the resources it should, that the resources are configured correctly, and that they can be destroyed.</p>
</li>
<li><p><a target="_blank" href="https://terraform-docs.io/">Terraform-docs</a>: This tool generates documentation for your Terraform modules in various formats, such as Markdown, HTML, and JSON. Terraform-docs parses your Terraform code and extracts documentation from comments, variable and output descriptions, and input/output examples. The tool then generates a table of contents with links to the relevant documentation for each module, which makes it easy to understand the purpose and usage of each module.</p>
</li>
<li><p><a target="_blank" href="https://github.com/terraform-linters/tflint">TFLint</a>: This is a Terraform linter that checks for errors and best practices in your Terraform code. TFLint helps to catch common mistakes, such as variable name clashes, missing required variables, or invalid resource arguments. It also checks for compliance with best practices, such as naming conventions and resource ordering. By using TFLint, you can catch errors early on, which helps to improve the quality of your Terraform code.</p>
</li>
<li><p><a target="_blank" href="https://www.infracost.io/">Infracost</a> : Infracost is an open-source tool that allows users to see the cost of running their infrastructure, such as AWS resources, in near real-time. It uses the AWS Price List API to determine the costs of resources, and can be integrated into CI/CD pipelines to provide cost feedback during the development process. This allows developers to make informed decisions about their infrastructure and optimize costs. Additionally, Infracost can be used to create alerts based on cost thresholds, so you can be notified when your infrastructure costs exceed a certain amount. This can be especially useful for teams that operate on a tight budget or need to manage costs closely.</p>
</li>
</ol>
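<p>To make this concrete, here is a minimal command-line tour of four of these tools. It is a sketch, not a full setup: it assumes you are inside a Terraform module directory with each binary already installed, and the Infracost step assumes you have registered for its free API key:</p>
<pre><code class="lang-bash"># Terragrunt: run plan across every module under the current directory
terragrunt run-all plan

# terraform-docs: render a Markdown table of the module's inputs and outputs
terraform-docs markdown table . &gt; README.md

# TFLint: download the rule plugins declared in .tflint.hcl, then lint
tflint --init
tflint

# Infracost: estimate the monthly cost of the resources in this module
infracost auth login
infracost breakdown --path .
</code></pre>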
<p>In conclusion, the five tools discussed in this article can greatly enhance the development experience when working with Terraform. By incorporating them into your development process, you can increase productivity and improve the reliability and maintainability of your infrastructure code. With these tools in place, you can supercharge your Terraform development and take your infrastructure as code to the next level.</p>
]]></content:encoded></item><item><title><![CDATA[Auto Vacuum Explained: Postgres Internals]]></title><description><![CDATA[Postgres auto vacuum is an automated maintenance process that helps keep a Postgres database running smoothly and efficiently. It is designed to remove unnecessary or outdated data, known as "dead tuples," from database tables. This helps to prevent ...]]></description><link>https://blog.yusadolat.me/auto-vacuum-explained-postgres-internals</link><guid isPermaLink="true">https://blog.yusadolat.me/auto-vacuum-explained-postgres-internals</guid><category><![CDATA[PostgreSQL]]></category><category><![CDATA[Databases]]></category><dc:creator><![CDATA[Yusuf Adeyemo]]></dc:creator><pubDate>Thu, 05 Jan 2023 10:24:33 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1672914116179/8b837b3e-ee28-457d-802f-fd4b6e167603.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Postgres auto vacuum is an automated maintenance process that helps keep a Postgres database running smoothly and efficiently. It is designed to remove unnecessary or outdated data, known as "dead tuples," from database tables. This helps to prevent bloat, which can slow down database performance and cause issues such as increased disk space usage and longer query times.</p>
<p>The autovacuum daemon performs two kinds of work: auto vacuum and auto analyze. Auto vacuum removes dead tuples and keeps the database clean, while auto analyze gathers statistics about the data so the query planner can build better execution plans.</p>
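<p>Autovacuum is enabled by default, and its behavior is controlled by a family of <code>autovacuum_*</code> settings. A quick way to inspect them, assuming you can reach the database with <code>psql</code>:</p>
<pre><code class="lang-bash"># Is the autovacuum daemon enabled at all?
psql -c "SHOW autovacuum;"

# List every autovacuum-related setting and its current value
psql -c "SELECT name, setting FROM pg_settings WHERE name LIKE 'autovacuum%';"
</code></pre>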
<h3 id="heading-how-data-is-deleted-in-postgres-data">How Data is Deleted in Postgres data</h3>
<p>To understand why auto vacuum is needed, it's important to understand how data is deleted in a Postgres database. When a row is deleted from a table, the space it occupied is not immediately reused. Instead, the row is marked as deleted and left in place. This is known as a "dead tuple." Over time, as more and more rows are deleted and marked as dead tuples, the table can become cluttered with unnecessary data that is taking up space and slowing down performance.</p>
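<p>You can see this dead-tuple accounting directly in Postgres's statistics views. A minimal sketch using <code>psql</code>; the table names it returns will be whatever exists in your own database:</p>
<pre><code class="lang-bash"># Tables with the most dead tuples, plus when autovacuum last cleaned them
psql -c "SELECT relname, n_live_tup, n_dead_tup, last_autovacuum
         FROM pg_stat_user_tables
         ORDER BY n_dead_tup DESC
         LIMIT 5;"
</code></pre>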
<p>Auto vacuum is designed to identify and remove these dead tuples on a regular basis, helping to keep the database clean and efficient. It is typically run automatically by the database system, but it can also be triggered manually by a database administrator.</p>
<p><strong>Here is an example of how auto vacuum works in a Postgres database:</strong></p>
<p>Imagine you have a database table called "customers" that stores information about your company's customers. One day, you decide to delete a customer from the table because they have moved away. Instead of actually deleting the row from the table, Postgres marks the row as a dead tuple and leaves it in place.</p>
<p>Over time, as more and more rows are deleted and marked as dead tuples, the "customers" table may become cluttered with unnecessary data. This can cause issues such as increased disk space usage and slower query times.</p>
<p>To solve this problem, Postgres runs an auto vacuum process on the "customers" table. The process scans the table and identifies any dead tuples that need to be removed. It then removes these dead tuples, freeing up space and improving the performance of the table.</p>
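<p>If you don't want to wait for the daemon, you can trigger the same cleanup by hand. A sketch against the hypothetical <code>customers</code> table from the example above:</p>
<pre><code class="lang-bash"># Reclaim dead tuples and report what was done
psql -c "VACUUM (VERBOSE) customers;"

# Confirm the dead-tuple count dropped
psql -c "SELECT n_dead_tup, last_vacuum FROM pg_stat_user_tables WHERE relname = 'customers';"
</code></pre>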
<p>In conclusion, Postgres auto vacuum is an important maintenance process that helps keep a database running smoothly and efficiently. It helps to prevent bloat by removing unnecessary or outdated data, improving performance, and reducing disk space usage. While it is typically run automatically by the database system, it can also be triggered manually by a database administrator as needed.</p>
]]></content:encoded></item></channel></rss>