<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Code With Adam]]></title><description><![CDATA[Views from a seasoned Solution Architect and Principal Developer who's worked with Microsoft to develop a Super Computer, Schnieder Electric to develop their Wiser Heat platform, Created the new Gas and Electricity switching framework in the UK and more]]></description><link>https://www.codewithadam.com</link><image><url>https://substackcdn.com/image/fetch/$s_!7gmQ!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbec5b551-5cc7-4943-bb9d-e64eb25a0f65_144x144.png</url><title>Code With Adam</title><link>https://www.codewithadam.com</link></image><generator>Substack</generator><lastBuildDate>Wed, 06 May 2026 11:00:02 GMT</lastBuildDate><atom:link href="https://www.codewithadam.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Adam White]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[codewithadam@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[codewithadam@substack.com]]></itunes:email><itunes:name><![CDATA[Adam White]]></itunes:name></itunes:owner><itunes:author><![CDATA[Adam White]]></itunes:author><googleplay:owner><![CDATA[codewithadam@substack.com]]></googleplay:owner><googleplay:email><![CDATA[codewithadam@substack.com]]></googleplay:email><googleplay:author><![CDATA[Adam White]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Increase Productivity with custom google chrome search engines]]></title><description><![CDATA[As a programmer, I spend a lot of my time searching through web-based software such as JIRA, Bitbucket, and Github.]]></description><link>https://www.codewithadam.com/p/increase-productivity-with-custom</link><guid isPermaLink="false">https://www.codewithadam.com/p/increase-productivity-with-custom</guid><dc:creator><![CDATA[Adam White]]></dc:creator><pubDate>Fri, 19 Jul 2024 13:23:17 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!7gmQ!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbec5b551-5cc7-4943-bb9d-e64eb25a0f65_144x144.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>As a programmer, I spend a lot of my time searching through web-based software such as JIRA, Bitbucket, and Github.</p><p>On a day-to-day basis, I&#8217;ll be looking up tickets by their reference numbers. Why? Why can&#8217;t I just get these tickets off of the current sprint board? Well, I don&#8217;t know about you, but I&#8217;ll be in the middle of a ticket, and a tester, business analyst, or product owner will drop me a message asking about XYZ-999.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.codewithadam.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading! 
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>What on earth was XYZ-999? I&#8217;ve managed to save a large amount of time by introducing custom search engines into Google for simple tasks such as quickly searching for a ticket reference.</p><p>Or even just looking up active pull requests or trying to find some documentation on Confluence.</p><p>In this short article, I&#8217;ll share how I go about doing this so that you can save yourself some time. Sure, it&#8217;s only 20-30 seconds here or there, but that adds up over a year.</p><h2><strong>Adding a custom search engine to Chrome</strong></h2><p>Open Chrome, right-click the address bar, and click <code>Manage search engines...</code></p><p>Click the add button. Under the section <code>other search engines</code>, a small popup will show with 3 form fields to fill in.</p><pre><code><code>Search Engine:
Keyword:
URL with %s in place of query</code></code></pre><p>Fill this in with the info about your search engine. For example, Confluence may look something like:</p><pre><code><code>Search Engine: Wiki
Keyword: wiki
URL: https://&lt;company&gt;.atlassian.net/wiki/search?text=%s</code></code></pre><h2><strong>How to use a custom search engine</strong></h2><p>Once you&#8217;ve added a custom search engine, you can use it by clicking on the address bar and using the keyword, followed by a space, then the text you want to search for, then press enter.</p><p><code>wiki which badger danced the best duck?&lt;enter&gt;</code></p><h2><strong>Examples from my uses</strong></h2><pre><code><code>Search Engine: Jira
Keyword: jira
URL: https://&lt;company&gt;.atlassian.net/browse/ref-%s</code></code></pre><pre><code><code>Search Engine: BBPRs
Keyword: bbpr
URL: https://bitbucket.org/&lt;company&gt;/&lt;repo&gt;/pullrequests/%s</code></code></pre><pre><code><code>Search Engine: GitHub_ProjectPRs
Keyword: ghpr
https://github.com/&lt;company&gt;/&lt;project&gt;/pulls</code></code></pre><pre><code><code>Search Engine: GitHub_AllPRs
Keyword: ghapr
https://github.com/pulls</code></code></pre><h2><strong>Wrap Up</strong></h2><p>This has been a very short post, but it is what it says on the tin, a very simple post showing you an easy tip to make searching GitHub, Jira, etc., much easier.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.codewithadam.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Adam&#8217;s Substack! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[How to configure Terraform logging]]></title><description><![CDATA[How to configure logging for Terraform]]></description><link>https://www.codewithadam.com/p/how-to-configure-terraform-logging</link><guid isPermaLink="false">https://www.codewithadam.com/p/how-to-configure-terraform-logging</guid><dc:creator><![CDATA[Adam White]]></dc:creator><pubDate>Fri, 19 Jul 2024 13:21:42 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!7gmQ!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbec5b551-5cc7-4943-bb9d-e64eb25a0f65_144x144.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2><strong>How to configure logging for Terraform</strong></h2><p>Sometimes things just go wrong, and the default error messages coming out is less than helpful.</p><p>This was the case for me this morning, I still don&#8217;t know why this happened, but when running <code>terraform init</code>, I would get the following error:</p><pre><code><code>Initializing the backend...

Initializing provider plugins...
- Finding hashicorp/azurerm versions matching "2.9.0"...
- Installing hashicorp/azurerm v2.9.0...

Error: Failed to install provider

Error while installing hashicorp/azurerm v2.9.0: open
.terraform\plugins\registry.terraform.io\hashicorp\azurerm\2.9.0\windows_amd64\terraform-provider-azurerm_v2.9.0_x5.exe:
The system cannot find the path specified.</code></code></pre><p>While this is a good error, it wasn&#8217;t enough to work out what the problem was. Fortunately, <a href="https://www.terraform.io/">Terraform</a> offers verbose logging, which gives far more information, making it easier to debug the issue.</p><p>You can find out more about Terraform&#8217;s <a href="https://www.terraform.io/docs/internals/debugging.html">logging</a> in the <em>Debugging Terraform</em> section of the documentation. I highly suggest enabling verbose logging locally. The information written out is much more helpful when running <em>init, plan</em> or <em>apply</em>.</p><p>It wasn&#8217;t immediately apparent how I would enable more verbose logging, so below are instructions for enabling this on both Windows and Linux. I set mine to <strong>TRACE</strong>, but you can also set the following log levels: <strong>DEBUG, INFO, WARN</strong>, or <strong>ERROR</strong>. I went straight for the most verbose setting, as I wanted to dig through exactly what had gone on in hopes of finding out what was going wrong.</p><h2><strong>Configuring Terraform logging</strong></h2><p>To enable the different levels of logging, Terraform requires you to configure two environment variables: <code>TF_LOG</code> and <code>TF_LOG_PATH</code>. You need to configure both of these. Otherwise, you won&#8217;t get any additional logs. I chose to call my log file terraform.log, but you can name it whatever you like. You can also output the file wherever you want.</p><h3><strong>Enabling in your current session</strong></h3><p>If you only want to enable this level of logging temporarily to work through a problem, then you can configure these settings just for the session you are working in. Here&#8217;s how to enable these in both PowerShell and Bash; the next time you run a <strong>terraform</strong> command, your log file will be created and will contain the verbose logging.</p><h3><strong>PowerShell</strong></h3><pre><code><code>&gt; $env:TF_LOG="TRACE"
&gt; $env:TF_LOG_PATH="./logs/terraform.log"</code></code></pre><h3><strong>Bash</strong></h3><pre><code><code>$ export TF_LOG="TRACE"
$ export TF_LOG_PATH="./logs/terraform.log"</code></code></pre><p>This works great when you just need these detailed logs for a single session. There are times when I&#8217;ve run Terraform within <a href="https://code.visualstudio.com/">VS Code</a>. The output to the console is so large that it overwrites the terminal buffer, preventing my ability to scroll back far enough to capture all the info I want. It&#8217;s for this reason that I&#8217;ve set this up as a permanent option, though I do suggest you add the log file to your .gitignore file.</p><h3><strong>Setting up verbose logging permanently for your profile</strong></h3><p>I&#8217;ve become a big fan of this setting, as it means my logs are persisted. There&#8217;s no need to rerun a command that failed with a deeper logging level to determine what went wrong as the error is already in my logs.</p><h4><strong>PowerShell Profile</strong></h4><p>To set this up as a permanent option in your PowerShell profile, you&#8217;ll need to open your Powershell profile. This is simple enough to do with the <strong>$profile</strong> command in a PowerShell console. Just simply type $profile, and it will output the location of your profile file.</p><p>Open that file up and add the following two lines (you can change the name and location to suit you):</p><pre><code><code># Terraform log settings
$env:TF_LOG="TRACE"
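# TRACE is the most verbose level; DEBUG, INFO, WARN and ERROR are also valid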
$env:TF_LOG_PATH="./logs/terraform.log"</code></code></pre><p>Close and reopen PowerShell and type the following to verify that the change has worked:</p><pre><code><code>&gt; echo $env:TF_LOG
TRACE
&gt; echo $env:TF_LOG_PATH
./logs/terraform.log</code></code></pre><h4><strong>Bash Profile</strong></h4><p>This is almost exactly the same as the PowerShell method, except the file name and location are different. Open your .bashrc file, which is located in your $HOME directory, then add the following lines:</p><pre><code><code># Terraform log settings
export TF_LOG=TRACE
export TF_LOG_PATH="./logs/terraform.log"</code></code></pre><p>Close and reopen your bash console, then type the following to confirm the change has worked correctly:</p><pre><code><code>$ echo $TF_LOG
TRACE
$ echo $TF_LOG_PATH
/logs/terraform.log</code></code></pre><h2><strong>Wrap Up</strong></h2><p>That&#8217;s it. That&#8217;s all you need to do to get some really useful logs out of Terraform. Thanks to the verbose logging, I was able to find out <em>what</em> had gone wrong, though why I still don&#8217;t know.</p><p>The azurerm provider exe wouldn&#8217;t populate in the plugins folder. The folder&#8217;s paths would, just not the exe. The logs showed that the file was accessible over TCP but still nothing.</p><p>In the end, I just downloaded the file manually and placed it in the correct location. While that&#8217;s not the most ideal solution, it was enough to allow me to crack on with the rest of my day.</p><p>Thanks for reading. I hope this has been some use.</p>]]></content:encoded></item><item><title><![CDATA[How to manage multiple GitHub accounts on a single machine]]></title><description><![CDATA[There comes a time when you may need to manage multiple GitHub accounts on the same machine.]]></description><link>https://www.codewithadam.com/p/how-to-manage-multiple-github-accounts</link><guid isPermaLink="false">https://www.codewithadam.com/p/how-to-manage-multiple-github-accounts</guid><dc:creator><![CDATA[Adam White]]></dc:creator><pubDate>Fri, 19 Jul 2024 13:20:43 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!UNeC!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc977b90a-b984-4c86-8c3e-75931cb45d23_1893x801.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>There comes a time when you may need to manage multiple GitHub accounts on the same machine. For me, it was when I a client I was providing services to moved from bitbucket over to GitHub. I suddenly needed to access my own GitHub as well as theirs from my dev box.</p><p>I hadn&#8217;t done this before, but surely it would be easy? Right? Well, it&#8217;s not difficult, but it&#8217;s also not as straight forward as you might expect, even more so if you work primarily on a windows machine.</p><p>The standard way to enable multiple GitHub logins on a single machine is to generate multiple SSH keys and alias the repo&#8217;s URL. First things first.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.codewithadam.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p><h2><strong>Generating the SSH Keys</strong></h2><p>Before you attempt to generate an SSH key, you should check for any existing SSH keys on the machine. If you are on a Mac/Linux, this can be achieved by running <code>ls -al ~/.ssh</code>; this command will list out of the existing and private key pairs if any exist.</p><p>If you are on windows like myself, then the keys exist in:</p><pre><code><code>C:\Users\&lt;username&gt;\.ssh\</code></code></pre><p>If the default key pair exists <code>~/.ssh/id_rsa</code>, this can be used. 
Otherwise, we can generate a default by running:</p><pre><code><code>ssh-keygen -t rsa -C "your email address"</code></code></pre><ul><li><p>-t stands for &#8216;type&#8217;</p></li><li><p>-C is for comment</p></li></ul><p>You can run the command without passing in a comment; when you run it, you&#8217;ll see the following (if you are on Windows and see an error, scroll down):</p><pre><code><code>Generating public/private rsa key pair.
Enter file in which to save the key (/c/Users/&lt;your_username&gt;/.ssh/id_rsa):</code></code></pre><p>Copy and paste the path <code>/c/Users/&lt;your_username&gt;/.ssh/</code> and add a unique file name, i.e. <code>/c/Users/&lt;your_username&gt;/.ssh/id_rsa_githubPersonal</code></p><p><strong>Make sure that you do not overwrite the existing id_rsa, as this is your existing key, which you may have set up for GitHub or some other SSH connection.</strong></p><p>You will then be prompted for a passphrase:</p><pre><code><code>Enter passphrase (empty for no passphrase):
Enter same passphrase again:</code></code></pre><p>You can enter a passphrase, or you can press &#8216;ENTER&#8217; twice to leave it blank.</p><p>You can check the folder <code>c:/Users/&lt;your_username&gt;/.ssh/</code> on Windows, or on Mac/Linux run <code>ls -al ~/.ssh</code></p><ul><li><p>ls lists all the files in the current directory</p></li><li><p>-a is for listing all of the files, including the hidden ones</p></li><li><p>-l is for listing in a long format</p></li></ul><p>You should get an output somewhat similar to this:</p><pre><code><code>total 46
drwxr-xr-x 1 your_username 1049089    0 Jan 26 10:40 .
drwxr-xr-x 1 your_username 1049089    0 Jan 23 12:12 ..
-rw-r--r-- 1 your_username 1049089 3309 Nov 30 11:21 id_rsa
-rw-r--r-- 1 your_username 1049089  547 Nov 30 11:21 id_rsa.pub
-rw-r--r-- 1 your_username 1049089 2675 Jan 26 10:40 id_rsa_githubPersonal
-rw-r--r-- 1 your_username 1049089  254 Jan 26 10:40 id_rsa_githubPersonal.pub
-rw-r--r-- 1 your_username 1049089  399 Nov 30 12:08 known_hosts</code></code></pre><p>Either via the console or from the folder view, you should be able to see the new SSH key file that you&#8217;ve just created. As you&#8217;ll see above, I have two files, <code>id_rsa_githubPersonal</code> and <code>id_rsa_githubPersonal.pub</code></p><ul><li><p><code>id_rsa_githubPersonal</code> this is your private key, which is stored on your machine</p></li><li><p><code>id_rsa_githubPersonal.pub</code> this is your public key. This is the one we&#8217;ll give to GitHub</p></li></ul><p>Before we move on, here&#8217;s how to fix this error when running this on Windows.</p><h2><strong>&#8216;ssh-keygen&#8217; is not recognized as an internal or external command</strong></h2><p>So I ran ssh-keygen and got &#8220;&#8216;ssh-keygen&#8217; is not recognized as an internal or external command&#8221;. I had Git for Windows installed, and my PATH variable was set properly (make sure your PATH variable has C:\Program Files\Git\cmd).</p><p>Yet, still, the command wouldn&#8217;t work.</p><p>For the command to work, you need to have ssh-agent started. I had a quick look in services.msc and couldn&#8217;t see any service named ssh-agent or anything similar; I was being a doughnut. It&#8217;s not uncommon. I fell back to PowerShell: <code>Start-Service ssh-agent</code></p><p><em>If you don&#8217;t want to be a doughnut like I was when looking in services.msc, look for </em><code>OpenSSH Authentication Agent</code></p><p>That, however, resulted in:</p><pre><code><code>Unable to start ssh-agent, error :1058</code></code></pre><p>This error occurs because while ssh-agent is installed, the service isn&#8217;t started. To prove this, you can run the following:</p><pre><code><code> &gt; Get-Service ssh-agent</code></code></pre><p>You should see something like this:</p><pre><code><code>Status   Name               DisplayName
------   ----               -----------
Stopped  ssh-agent          OpenSSH Authentication Agent</code></code></pre><p>Ok, so it&#8217;s stopped. Actually, mine was disabled, which you can see by running:</p><pre><code><code>&gt; Get-Service ssh-agent | Select StartType

StartType
---------
Disabled</code></code></pre><p>To start the service, run the following:</p><pre><code><code>&gt; Get-Service -Name ssh-agent | Set-Service -StartupType Manual
&gt; Start-Service ssh-agent</code></code></pre><p>This can all be done in the GUI; make sure to look for OpenSSH, not SSH Agent like I did.</p><p>Now in a new console window, if you type <code>ssh-keygen</code> it suddenly works!</p><h2><strong>Adding your new SSH key to the GitHub account.</strong></h2><p>Now that we&#8217;ve generated the public SSH key, we need to add it to your GitHub account.</p><p>First things first, we need to copy the public key, on windows open <code>c:/Users/&lt;your_username&gt;/.ssh/id_rsa_githubPersonal</code> with your favorite text editor. I suggest VsCode.</p><p>On Linux/Mac, you could use <code>atom ~/.ssh/id_rsa_githubPersonal</code>; the text editor you use is entirely your choice. When the file is opened, it should look something similar to this:</p><pre><code><code>ssh-rsa AAAAB5hgxC1yc2EAAAADAQABABABAQDEmSbc7ms4TFIf7G0e9EqdrQRTB17VFTqRtCbQ55sSc11xZP5B07UXf9+................a955cf1GUzsNIr60E7VuVxirrr+K2.............nDEg1H/VbyJtEekh4Aav9csQw3r7y test@codewithadam.com</code></code></pre><p>The key is longer than this, but I&#8217;ve shortened it and randomized it a bit for the tutorial.</p><p>To add this to your GitHub account, go to your <code>Github account -&gt; Settings -&gt; SSH and GPG keys</code></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!UNeC!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc977b90a-b984-4c86-8c3e-75931cb45d23_1893x801.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!UNeC!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc977b90a-b984-4c86-8c3e-75931cb45d23_1893x801.jpeg 424w, https://substackcdn.com/image/fetch/$s_!UNeC!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc977b90a-b984-4c86-8c3e-75931cb45d23_1893x801.jpeg 848w, https://substackcdn.com/image/fetch/$s_!UNeC!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc977b90a-b984-4c86-8c3e-75931cb45d23_1893x801.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!UNeC!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc977b90a-b984-4c86-8c3e-75931cb45d23_1893x801.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!UNeC!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc977b90a-b984-4c86-8c3e-75931cb45d23_1893x801.jpeg" width="1456" height="616" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c977b90a-b984-4c86-8c3e-75931cb45d23_1893x801.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:616,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;GitHub SSH Settings&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="GitHub SSH Settings" 
title="GitHub SSH Settings" srcset="https://substackcdn.com/image/fetch/$s_!UNeC!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc977b90a-b984-4c86-8c3e-75931cb45d23_1893x801.jpeg 424w, https://substackcdn.com/image/fetch/$s_!UNeC!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc977b90a-b984-4c86-8c3e-75931cb45d23_1893x801.jpeg 848w, https://substackcdn.com/image/fetch/$s_!UNeC!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc977b90a-b984-4c86-8c3e-75931cb45d23_1893x801.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!UNeC!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc977b90a-b984-4c86-8c3e-75931cb45d23_1893x801.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Click on the <em>New SSH key</em> button.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!dGQi!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd72174b8-8113-4fac-83d8-5da9c8c65475_778x465.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!dGQi!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd72174b8-8113-4fac-83d8-5da9c8c65475_778x465.jpeg 424w, https://substackcdn.com/image/fetch/$s_!dGQi!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd72174b8-8113-4fac-83d8-5da9c8c65475_778x465.jpeg 848w, https://substackcdn.com/image/fetch/$s_!dGQi!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd72174b8-8113-4fac-83d8-5da9c8c65475_778x465.jpeg 1272w, 
https://substackcdn.com/image/fetch/$s_!dGQi!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd72174b8-8113-4fac-83d8-5da9c8c65475_778x465.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!dGQi!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd72174b8-8113-4fac-83d8-5da9c8c65475_778x465.jpeg" width="778" height="465" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/d72174b8-8113-4fac-83d8-5da9c8c65475_778x465.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:465,&quot;width&quot;:778,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;New Github ssh key&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="New Github ssh key" title="New Github ssh key" srcset="https://substackcdn.com/image/fetch/$s_!dGQi!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd72174b8-8113-4fac-83d8-5da9c8c65475_778x465.jpeg 424w, https://substackcdn.com/image/fetch/$s_!dGQi!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd72174b8-8113-4fac-83d8-5da9c8c65475_778x465.jpeg 848w, https://substackcdn.com/image/fetch/$s_!dGQi!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd72174b8-8113-4fac-83d8-5da9c8c65475_778x465.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!dGQi!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd72174b8-8113-4fac-83d8-5da9c8c65475_778x465.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Give your new SSH key a title, 
and then paste the new SSH key you copied earlier. I suggest naming your keys after the machine they are from and the purpose, i.e., personal, company, etc.</p><h2><strong>Registering the new SSH Key with ssh-agent</strong></h2><p>Now that we&#8217;ve created a key and added it to GitHub, we need to add it to our SSH agent. This can be done by running:</p><pre><code><code>ssh-add ~/.ssh/id_rsa_githubPersonal</code></code></pre><p>On Windows:</p><pre><code><code>ssh-add c:/Users/&lt;your_username&gt;/.ssh/id_rsa_githubPersonal</code></code></pre><p>Now we have two choices as to how we want to work with the SSH keys going forward. First, we&#8217;ll cover the SSH configuration file. Secondly, I&#8217;ll show you how to have only one SSH key active in your ssh-agent at any one time.</p><h2><strong>Creating the SSH config file</strong></h2><p>In this file, you can specify different SSH configuration rules for different hosts; the <em>host</em> in use determines which SSH key is used.</p><p>The SSH config will live at <code>~/.ssh/</code> or <code>c:/Users/&lt;your_username&gt;/.ssh/</code> and is a file called config. To create this file, you can use a few methods such as <code>$ touch config</code>, but I tend to use VS Code, so <code>$ code config</code></p><p>This is where the magic happens. Update the config file to use your SSH keys like so:</p><pre><code><code># Personal account 
Host github.com
   HostName github.com
   User git
   IdentityFile ~/.ssh/id_rsa_githubPersonal
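   # Optional: offer only this key for this host, even if ssh-agent has several keys loaded
   IdentitiesOnly yes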
   
# Client account
Host github.com-clientX
   HostName github.com
   User git
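   # Optional: as above, pin this host entry to its own key
   IdentitiesOnly yes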
   IdentityFile ~/.ssh/id_rsa_githubClientX</code></code></pre><p><em>clientX</em> is the GitHub user id for the client I am providing services to.</p><p>Using <em>github.com-clientX</em> as a notation differentiates the various Git accounts. However, you can also use &#8220;clientx.github.com&#8221;. I suggest being consistent with whatever notation you use. This becomes very relevant when you clone a repository or update the remote origin URL.</p><p>The configuration above tells SSH to:</p><ul><li><p>use id_rsa_githubPersonal for any git URL that uses @github.com</p></li><li><p>use id_rsa_githubClientX for any git URL that uses @github.com-clientx</p></li></ul><h1><strong>One SSH Key active in ssh-agent at any one time</strong></h1><p>This approach doesn&#8217;t require an SSH config file. Instead, you manually ensure that the ssh-agent has only the relevant SSH key attached for your Git operations.</p><p><code>ssh-add -l</code> will list all the SSH keys attached to the ssh-agent at that moment in time. You would then remove all but the one you want to use. The easiest way to do this is to remove them all and then re-add the key you want to use. You can do this like so:</p><pre><code><code>$ ssh-add -D            # removes all ssh key entries from the ssh-agent
$ ssh-add ~/.ssh/id_rsa_githubPersonal                 # Adds the relevant ssh key</code></code></pre><p>The ssh-agent now only has the key that&#8217;s mapped to my personal GitHub account. When I do a git push to my personal repository, it will use that key.</p><p>If I need to push to my client&#8217;s repository, then I&#8217;ll need to rerun the commands, this time specifying the client&#8217;s key:</p><pre><code><code>$ ssh-add -D 
$ ssh-add ~/.ssh/id_rsa_githubClientX</code></code></pre><h2><strong>Setting the git remote URL on the local repositories</strong></h2><p>For the repositories that already exist on your machine, you can update the URL by running the following command:</p><pre><code><code>git remote set-url origin git@&lt;hostname&gt;:&lt;path to repo&gt;.git

i.e.

git remote set-url origin git@github.com-clientX:ClientX/AllTheCodes.git</code></code></pre><p>Make sure to set the git username and email in each repo, which can be done by going into the repo and running <code>git config user.name</code> and <code>git config user.email</code>.</p><pre><code><code>git config user.name "user x" # updates the git config username
git config user.email "userx@client.com" # updates the git config email</code></code></pre><p>If you&#8217;ve done a <code>git init</code>, then to set the remote URL, you can use the following:</p><p><code>git remote add origin git@github.com-clientX:ClientX/AllTheCodes.git</code></p><p>Obviously, make the changes to match your SSH config and git URLs.</p><p>Just make sure that the string between the @ and : matches the host you specified in your ssh config.</p><p>You can then push your initial commit to your GitHub repo.</p><pre><code><code>git add .
git commit -m "Initial commit"
git push -u origin master</code></code></pre><h2><strong>setting the host while cloning repositories</strong></h2><p>Like the above step, when we clone a repository for the first time, we can change the host to match the ssh key we want to use.</p><p>So hop into GitHub and grab the SSH clone URL; it&#8217;ll look something like this:</p><p><code>git clone git@github.com:personal_account_name/repo_name.git</code></p><p>update it to tie into the ssh-key</p><p><code>git clone git@github.com-clientX:acct_name/repo_name.git</code></p><p>The change I&#8217;ve made here is to update the host to match the name that I&#8217;ve set in the SSH config. The string between @ and : should match the SSH config.</p><h2><strong>Wait&#8230; I can&#8217;t use SSH Keys; GitHub config blocks it.</strong></h2><p>Ah&#8230; Welcome to my world.</p><p>After completing all of the above myself, I went to pull the latest from the client&#8217;s repository to be faced with an error stating that the use of SSH keys was prohibited and would need explicit approval.</p><p>My client had decided that personal access tokens were the way forward and that SSH was a no go. Fortunately, that has worked out well for me.</p><p>My personal GitHub Repos all use SSH and the methods I&#8217;ve detailed above; my current client uses personal access tokens, so I authenticate with GitHub over HTTPS using my email as my username and the personal access token as the password.</p><p>If you go to <code>Your GitHub Account -&gt; Settings -&gt; Developer Settings -&gt; Personal Access tokens</code>, you can generate a new token there.</p><p>When you try to pull/clone on windows, you&#8217;ll get a popup asking for your username and password; enter your email and then the personal access token as the password. This will be stored in your credential manager.</p><p>This is what I&#8217;ve done, and it works very well. One caveat is that while you can add multiple entries within the credential manager (hit windows key and type Credential Manager, or enter it into the search bar), only one of them will work; the other won&#8217;t, so you can have both your personal and client credentials in the credential manager. Though I didn&#8217;t test out whether by modifying the host as we do above for the SSH config, whether that would work&#8230;</p><h3><strong>I get &#8220;remote: Repository not found&#8221; when I try to use the personal access token</strong></h3><p>This happens because you already have a GitHub credential cached in your system. That token doesn&#8217;t have access to the repositories you are trying to access. As above, open Credential Manager and update the credentials.</p><h2><strong>Wrap Up</strong></h2><p>Hopefully, this has helped you get your git all set up to work with multiple accounts. I know that I am happily working away with no problems now. I had hoped to find something like google where I can be logged into multiple email accounts at the same time, being able to open them all independently. Unfortunately, this isn&#8217;t something that GitHub supports. 
Instead, I ended up creating a new profile in Chrome and using that for the client-specific logins.</p>]]></content:encoded></item><item><title><![CDATA[Isolated environments for feature branching in Azure]]></title><description><![CDATA[Feature branching is pretty standard these days, perhaps you are doing it, or perhaps you are considering it.]]></description><link>https://www.codewithadam.com/p/isolated-environments-for-feature</link><guid isPermaLink="false">https://www.codewithadam.com/p/isolated-environments-for-feature</guid><dc:creator><![CDATA[Adam White]]></dc:creator><pubDate>Fri, 19 Jul 2024 13:19:56 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!7gmQ!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbec5b551-5cc7-4943-bb9d-e64eb25a0f65_144x144.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Feature branching is pretty standard these days, perhaps you are doing it, or perhaps you are considering it. Unfortunately, there is a downside to feature branching.</p><p>Testing them.</p><p>Typically, there are a finite number of environments that you can use to work on, i.e., Dev, Test, PreProd, etc., etc. Every company I&#8217;ve worked for has its own set of environments and naming conventions for those environments. That&#8217;s a bit irrelevant here. As developers, we have localhost, which doesn&#8217;t quite work the same as it does when deployed. There&#8217;s always a gotcha, i.e., if you are using Azure API management, well, you won&#8217;t be using that locally, and many other sorts of things. So feature branching works great for the most part locally, but when it&#8217;s pushed out to an environment for testing, we come up against an issue.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.codewithadam.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p><p>Firstly, that environment is now isolated to that feature. Any schema or data changes specific to that feature are now locked in. We can&#8217;t just deploy another feature branch as the data and or schema is out of sync. Also, testing resources&#8230; you may have several testers within your team. Now that the environment has been occupied by a feature, only the tester[s] involved can work on that feature; the others are sat, stuck waiting.</p><p>Sure we can have Dev1, Dev2, etc., or if your sprint team has a name, i.e., Nomad1, Nomad2, corpo4, etc., that gives some flexibility.</p><p>But, what if you could create an environment that is set up for isolated testing of just your work, regardless of data changes? 
Multiple features in test at any given time, the ability to have a dev/test feedback cycle isolated to just your changes, no undue pressure to get your work done, other than the sprint deadline, not &#8220;oh,  I&#8217;ve got to release this environment so Y can get their work on it&#8221;.</p><p>Having an environment that exists just long enough for the work to be done, tested, and then merged back into the main development branch? Wouldn&#8217;t that save you money not spinning up (x) number of environments which may or may not be utilized, just sat waiting doing nothing, depending entirely on the throughput of the team, i.e., one sprint you may need 4 environments, the next 2. leaving 2 fully-fledged environments costing your company for no reason.</p><p>Well, you can, you can utilize your azure pipelines and your ARM/Terraform scripts to generate an environment isolated to just your changes, not just that, but you can do so for free or &#8220;almost free&#8221;, using the free tiers of Azure&#8217;s products and sharing the ones that cost money, the sharing aspect might not be ideal for every situation, but in those situations having one or two fully-fledged environments available is still a darn sight cheaper than having a range of environments sat about for no reason, personnel resource waiting while they wait for environments to clear up.</p><h2><strong>How it&#8217;s done.</strong></h2><p>I&#8217;ll show you an example of how I&#8217;ve accomplished it and how it worked in the teams. It won&#8217;t be a fully-fledged tutorial with all of the arm templates hanging about. I can&#8217;t see the value in that, your setup will be unique to you, so there will be some work to pull apart a chosen environment and see where you can split this out into isolated per deployment environments.</p><p>I will show you how to trigger a build that identifies the branches to build from.</p><p>Basically, I&#8217;ll show you how to enable this, but this isn&#8217;t a copy/paste exercise; it will require work on your side. I would suggest wrapping all the components into a single resource group for each feature branch.</p><p>i.e. rg-featurebranch-{env}</p><p>This will allow easy removal of these feature branches once they are done with, as we can just delete the entire resource group, which will take care of the removal of the unique resources.</p><h2><strong>Consul</strong></h2><p>In the past, I&#8217;ve done the same using consul and updating the consul config to point to a different deployed service rather than the official release for that environment. As an aside, a neat trick is to use technology such as mountebank. This can mock away entire services, allowing your UI to get responses similar to those it would from a real service. Enabling your front end team to work on the front end while the backend team creates the backend. A really handy feature of this is that with mountebank, you can simulate things going wrong, so the UI can specifically handle errors not easily replicated when the backend is present. I&#8217;ve even used it to simulate intermittent backend failures. You end up building a very robust front end when you utilize technologies such as these.</p><h2><strong>Triggering a feature branch build via PowerShell</strong></h2><p>The majority of the work is going to be handled by your azure pipelines. Still, we need a way to tell azure pipelines to generate a build against a specific branch and to give it an environment name, i.e., the name of your feature branch. 
For example, if you are using JIRA or any other issue tracking, work tracking system, there is usually some form of a unique identifier for the ticket that you are working on. This is a fantastic identifier to use as it links the agile/issue tracking board item to your environment. You can also have your build server update the ticket with information about the environment.</p><p>Such as the CosmosDB access token for the environment (should you use cosmos).</p><p>To do this, we&#8217;ll need to create a PowerShell script that can be run. This script will need a few pieces of information.</p><ul><li><p>the user&#8217;s email address</p></li><li><p>the users access token for Azure DevOps, which can be generated here: <a href="https://dev.azure.com/%7Borganization%7D/_usersSettings/tokens">https://dev.azure.com/{organization}/_usersSettings/tokens</a></p></li><li><p>the name of the branch the user wishes to build</p></li><li><p>the environment name</p></li><li><p>(optional extra) the issue tracking ticket reference. This is so the build server can update the ticket with the build info; this is optional as you&#8217;ll need to implement the call to your issue tracker.</p></li></ul><p>The PowerShell script will also need to know the project guid reference to trigger a build within; this can be found by calling this URL: <a href="https://dev.azure.com/%7Borganization%7D/_apis/projects?api-version=5.0-preview.3">https://dev.azure.com/{organization}/_apis/projects?api-version=5.0-preview.3</a>. within the JSON response, you&#8217;ll see id and URL; both hold the guid you are looking for.</p><p>Here&#8217;s an example PowerShell script</p><pre><code><code>param(
    [string] $emailAddress,
    [string] $token,
    [string] $branch,
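    # Kept short on purpose - the environment name is appended to Azure resource names, which have length limits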
    [ValidateLength(1,5)]
    [string] $environmentName,
    [string] $ticketRef
)
$ErrorActionPreference = "Stop"
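# With $ErrorActionPreference set to "Stop", the Write-Error calls below terminate the script immediately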
if (!$emailAddress) {
    Write-Error -Message "email address required -emailAddress"
}
if (!$token) {
    Write-Error -Message "Please provide azure devops access token: https://dev.azure.com/{organisation}/_usersSettings/tokens (Token needs Build Read &amp; execute permissions) -token"
}
if (!$branch) {
    Write-Error -Message "Please provide the branch name to build -branch"
}
if (!$environmentName) {
    Write-Error -Message "Please provide an environment name -environmentName"
}

if (!$ticketRef) {
    Write-Output "No ticket reference provided. Unable to update ticket with feature branch information"
    $ticketRef = "NA"
}



$environmentName = $environmentName.ToLower()
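# Request body for the Azure DevOps pipeline runs REST API: refName selects the branch to build,
# and templateParameters passes the ticket reference and environment name through to the pipeline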

$body = '
{
    "stagesToSkip": [],
    "resources": {
        "repositories": {
            "self": {
                "refName": "refs/heads/' + $branch + '"
            }
        }
    },
    "templateParameters": {
        "ticketRef": "'+ $ticketRef + '",
        "EnvironmentName": "' + $environmentName + '"
    },
    "variables": {}
}
'
$bodyJson=$body | ConvertFrom-Json
Write-Output $bodyJson
$bodyString=$bodyJson | ConvertTo-Json -Depth 100
Write-Output $bodyString
$user=$emailAddress
$token=$token
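# Azure DevOps accepts a personal access token over HTTP basic auth, encoded as base64("<user>:<token>")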
$base64AuthInfo = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(("{0}:{1}" -f $user,$token)))

$Uri = "https://dev.azure.com/{organisation}/{projectGuid}/_apis/pipelines/{definitionId}/runs?api-version=5.1-preview"  # get project guid from: https://dev.azure.com/{organisation}/_apis/projects?api-version=5.0-preview.3 - Definition ID can be found in the url of the build you are triggering.
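# POST the run request; the response describes the queued pipeline run (id, state, URL)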
$buildresponse = Invoke-RestMethod -Method Post -UseDefaultCredentials -ContentType application/json -Uri $Uri -Body $bodyString -Headers @{Authorization=("Basic {0}" -f $base64AuthInfo)}
Write-Output $buildresponse</code></code></pre><p>So, we&#8217;ve got a PS script that will generate a build via the Azure pipelines rest api. But what should we build?</p><h2><strong>The build</strong></h2><p>This will be unique to you, but what I suggest is that this triggers your standard build template; the easiest solution would be to duplicate your main dev pipeline and update them. Firstly, you won&#8217;t want this to be triggered by anything, as this will be handled via the Powershell script.</p><p>You&#8217;ll want the build to write out some files about this PR build, specifically the environment name, which will be vital for the release process.</p><pre><code><code>param(
    [string] $ParameterKey,
    [string] $ParameterValue,
    [string] $directory
)

Write-Output $ParameterValue.trim() | Out-File "$directory/$ParameterKey.txt"</code></code></pre><pre><code><code> - task: PowerShell@2
    displayName: Write EnvironmentName to a file
    inputs:
      targetType: 'filePath'
      filePath: $(Build.SourcesDirectory)\build\build_pipeline_scripts\write-BuildParameterToFile.ps1
      arguments: -ParameterKey "EnvironmentName" -ParameterValue "${{ parameters.EnvironmentName }}" -directory "$(Build.ArtifactStagingDirectory)/environment-info"
    env: 
      SYSTEM_ACCESSTOKEN: $(system.accesstoken)
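  # Publish the environment-info folder as a pipeline artifact so the release stage can read the environment name back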
  - task: PublishPipelineArtifact@1
    displayName: Package environment-info artifact 
    inputs:
      targetPath: '$(Build.ArtifactStagingDirectory)/environment-info'
      artifact: 'environment-info'</code></code></pre><p>The rest of the build process will most likely mirror your existing pipeline, perhaps with a step here or there missing, such as not running code analysis on the build.</p><p>This can now trigger your feature branch environment release.</p><h2><strong>The release</strong></h2><p>One of the differences we&#8217;ll make to the release is that this will update the variables beforehand to include the feature branch environment name; we saved this in a folder called environment-info and a file named EnvironmentName.txt</p><ul><li><p>Step 1 - deploy the infrastructure (Cosmos, storage accounts, queues, etc.)</p></li><li><p>Step 2 - Deploy code-based things such as function apps</p></li><li><p>Step 3 - Provide access tokens</p></li><li><p>Step 4 - API Management and the like</p></li></ul><h3><strong>Step 1</strong></h3><p>So perhaps you have a storage account name variable in the main pipeline, then I suggest that this updates that variable to include the environment by running a script: <code>Write-Host "##vso[task.setvariable variable=STORAGE_ACCOUNT_NAME]yournamingconvention$env"</code></p><p>Do this for all the infrastructure that doesn&#8217;t need to be shared. i.e., function apps, cosmos, storage, queues, etc.</p><p>It&#8217;s then a case of letting the release happen as it normally would; the only difference here is that the variables have been updated to include the environment, so each deployment will be unique. and asides any updates to the ticket via the ticketing system api</p><h3><strong>Step 2</strong></h3><p>As with step one, the main difference here is that we&#8217;ll update the variables to include the environment name; this is so we know where to deploy the code and also making sure that the software services have unique and identifiable names, i.e., FUNCAPP<em>CACHERESETTER in dev would be FUNCAPP</em>CACHERESETTER_1021 in your feature branch, assuming the reference was 1021.</p><p>Again, this release pipeline step will match your development one after the variables have been updated. asides any updates to the ticket via the ticketing system api</p><h3><strong>Step 3</strong></h3><p>Once the deployment of step 1 &amp; two has been successful, we can use Azure Powershell to get the variables access tokens from storage accounts, cosmos, function apps, etc. This information can then be published on teams, slack, etc. Added to your ticket within the ticketing system.</p><p>As we have an environment now in Azure, all contained in its own resource group, any access tokens can be picked up by going to the individual resource. You may have app insights or log analytics enabled for your feature. As such, you can access those logs within your resource group.</p><p>Within the pipeline, you may have acceptance tests triggered against dev; if so, like we&#8217;ve modified the infrastructure, we can modify these to take in various variables, allowing you to run these acceptance tests against your version of the deployed assets, which should give you a high level of confidence that everything is working as it once was before you merge your changes to the develop or master or whatever branch. It also allows your testers (or yourself?) to add further acceptance tests to cover the feature and test them against your deployed environment.</p><h3><strong>Step 4</strong></h3><p>If you use something like API Management, then you&#8217;ll want a way of calling your APIs for your feature branch, bypassing the development ones. 
For example, This can be as simple as adding a header such as &#8220;x-featurebranch&#8221;. which can be used to route APIM to your APIs; you may need to also pass headers that have the relevant access token keys for each function.</p><p>The rest of your APIM policy will continue as is. Though you may have things, you need to change to work with your unique environment.</p><h2><strong>Wrap Up</strong></h2><p>When I started to write this post, I thought it would be much longer, but really, this builds on top of your very unique build and release pipelines. This turned into a &#8220;here&#8217;s an idea of how to&#8221; rather than a, here&#8217;s how to do it! I&#8217;m also not suggesting that this approach is the best, just an approach. There may be better ways to do this using Azure Blueprints or something entirely different.</p><p>The core concept that I hope I&#8217;ve shared is that it&#8217;s possible thanks to Azure, AWS, GCP, and the like to deploy infrastructure and code repeatably and uniquely. i,e. I can have a Development environment and an Adams Testing Badger land environment, which look exactly the same but run on their own instances of infrastructure and code. Because of this, we don&#8217;t need to keep environments hanging around, getting dirty, wondering why suddenly something isn&#8217;t working when it was yesterday, turns out someone changed a file.</p><p>No, we can deploy from scratch each time and do apples to apples comparisons, ensuring that tests pass repeatably. It also means that should we want to introduce load testing, we can. We can fire up a clean environment, load it with data, and hammer it. We&#8217;ll get predictable results each time, give or take the various networking and cloud platform niggles.</p>]]></content:encoded></item><item><title><![CDATA[Trigger Azure Functions on Event Hub]]></title><description><![CDATA[Azure Event Hub is one of a suite of products offered within Microsoft Azure.]]></description><link>https://www.codewithadam.com/p/trigger-azure-functions-on-event</link><guid isPermaLink="false">https://www.codewithadam.com/p/trigger-azure-functions-on-event</guid><dc:creator><![CDATA[Adam White]]></dc:creator><pubDate>Fri, 19 Jul 2024 13:19:07 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!7gmQ!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbec5b551-5cc7-4943-bb9d-e64eb25a0f65_144x144.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><a href="https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-about">Azure Event Hub</a> is one of a suite of products offered within Microsoft Azure. This product allows you to create event-based solutions, handling millions of events per second. Ideal for IoT (internet of things) devices and is geared towards big data scenarios, such as banking, manufacturing, and many other scenarios.</p><p>Event Hub decouples the produces of the events from the consumers reading the events. It can store events for up to 7 days. This helps you scale the application to fit with different load patterns that your customers might generate. Perhaps there is more activity during the weekend. 
Event Hubs supports automatic scaling out and in.</p><p>Each event can be up to 256KB in size (this may have changed since writing; review the Azure documentation for the latest limitations).</p><h2><strong>So, what is an Event?</strong></h2><p>An event is simply:</p><ul><li><p>a type of message</p></li><li><p>Only contains information about what happened, not what triggered it. The sender and receiver are not dependent on each other.</p></li><li><p>There are no expectations from the publisher as to how the event will be handled. The consumer will decide how it handles the event.</p></li><li><p>Event Hubs can be easily integrated with both Azure and non-Azure services.</p></li><li><p>You can automatically capture events to Azure blob storage or data lake as events enter the hub.</p></li><li><p>An event can be consumed by multiple consumers from Event Hubs by using consumer groups</p></li></ul><h2><strong>Components of Event Hubs</strong></h2><ul><li><p><strong>Namespace</strong> - Container for the event hubs</p></li><li><p><strong>Event producers</strong> - Send data to event hubs</p></li><li><p><strong>Partitions</strong> - Buckets of messages (1-32 partitions)</p></li><li><p><strong>Consumer groups</strong> - View of an event hub</p></li><li><p><strong>Subscribers</strong> - The applications that read events contained within an event hub</p></li></ul><h2><strong>Create an Event Hub Client</strong></h2><p>Microsoft has amazing documentation and samples, so you can easily follow what&#8217;s shown in this article: <a href="https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-dotnet-standard-getstarted-send#send-events">Send events to and receive events from Azure Event Hubs - .NET (Azure.Messaging.EventHubs)</a></p><p>The client will look something like this:</p><pre><code><code>using Microsoft.Azure.EventHubs;
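// NOTE: this sample uses the older Microsoft.Azure.EventHubs SDK, so that NuGet package needs to be
// referenced in the project; the newer Azure.Messaging.EventHubs package has a different API surface.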
using System;
using System.Text;
using System.Threading.Tasks;

namespace AzureEventHubClient
{
    public class Program
    {
        private static EventHubClient eventHubClient;
        private const string EventHubConnectionString = "{Event Hubs connection string}";
        private const string EventHubName = "{Event Hub path/name}";

        public static void Main(string[] args)
        {
            MainAsync(args).GetAwaiter().GetResult();
        }

        private static async Task MainAsync(string[] args)
        {
            // Creates an EventHubsConnectionStringBuilder object from the connection string, and sets the EntityPath.
            // Typically, the connection string should have the entity path in it, but for the sake of this simple scenario
            // we are using the connection string from the namespace.
            var connectionStringBuilder = new EventHubsConnectionStringBuilder(EventHubConnectionString)
            {
                EntityPath = EventHubName
            };

            eventHubClient = EventHubClient.CreateFromConnectionString(connectionStringBuilder.ToString());

            await SendMessagesToEventHub(100);

            await eventHubClient.CloseAsync();

            Console.WriteLine("Press ENTER to exit.");
            Console.ReadLine();
        }

        // Creates an event hub client and sends 100 messages to the event hub.
        private static async Task SendMessagesToEventHub(int numMessagesToSend)
        {
            for (var i = 0; i &lt; numMessagesToSend; i++)
            {
                try
                {
                    var message = $"Message {i}";
                    Console.WriteLine($"Sending message: {message}");
                    await eventHubClient.SendAsync(new EventData(Encoding.UTF8.GetBytes(message)));
                }
                catch (Exception exception)
                {
                    Console.WriteLine($"{DateTime.Now} &gt; Exception: {exception.Message}");
                }

                await Task.Delay(10);
            }

            Console.WriteLine($"{numMessagesToSend} messages sent.");
        }
    }
}</code></code></pre><h2><strong>Create an Event Hub</strong></h2><p>Rather than walking you through this step by step, I&#8217;m going to defer to Microsoft, who&#8217;ve put together this <a href="https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-create">excellent article</a>.</p><p>To create an event hub, you&#8217;ll need to:</p><ul><li><p>Create a resource group</p></li><li><p>Create an event hub namespace</p></li><li><p>Create an event hub within the new or existing namespace</p></li><li><p>Set your partition count (max 32), and message retention (max 7 days)</p></li></ul><p>Once your event hub has been created, create a new shared access policy; I suggest you set <code>send/listen</code> as permissions. Once created, copy the primary connection string. You&#8217;ll need this for deploying and testing.</p><h2><strong>Create an Event Hub .NET Core client</strong></h2><p>Again, Microsoft has amazing documentation and samples, so you can easily follow what&#8217;s shown in this article:</p><p><a href="https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-dotnet-standard-getstarted-send">Send events to and receive events from Azure Event Hubs - .NET (Azure.Messaging.EventHubs)</a></p><p>You&#8217;ll end up with something that looks like the client shown above.</p><h2><strong>Create a function app trigger</strong></h2><p>Just create the default Azure Function event hub trigger template. This is shipped with Visual Studio.</p><pre><code><code>using System;
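// NOTE: as described below, the trigger needs the Microsoft.Azure.WebJobs.Extensions.EventHubs NuGet
// package, and Microsoft.Azure.EventHubs.Processor when running in the local functions emulator.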
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.EventHubs;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

namespace MyFirstFunctions
{
    public static class EventHubHelloWorld
    {
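        // "helloworld" is the name of the event hub to read from, "AzureEventHubConnectionString" is the
        // name of the app setting (local.settings.json when running locally) that holds the namespace
        // connection string, and "$Default" is the built-in consumer group used when none is specified.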
        [FunctionName("EventHubHelloWorld")]
        public static async Task Run([EventHubTrigger("helloworld", Connection = "AzureEventHubConnectionString",  ConsumerGroup = "$Default")] EventData[] events, ILogger log)
        {
            var exceptions = new List&lt;Exception&gt;();

            foreach (EventData eventData in events)
            {
                try
                {
                    string messageBody = Encoding.UTF8.GetString(eventData.Body.Array, eventData.Body.Offset, eventData.Body.Count);

                    // Replace these two lines with your processing logic.
                    log.LogInformation($"C# Event Hub trigger function processed a message: {messageBody}");
                    await Task.Yield();
                }
                catch (Exception e)
                {
                    // We need to keep processing the rest of the batch - capture this exception and continue.
                    // Also, consider capturing details of the message that failed processing so it can be processed again later.
                    exceptions.Add(e);
                }
            }

            // Once processing of the batch is complete, if any messages in the batch failed processing throw an exception so that there is a record of the failure.

            if (exceptions.Count &gt; 1)
                throw new AggregateException(exceptions);

            if (exceptions.Count == 1)
                throw exceptions.Single();
        }
    }
}</code></code></pre><p>When I went to build, this is had two problems. Firstly the <code>using Microsoft.Azure.EventHubs</code> was showing as missing. I had to go and reference that NuGet package manually.</p><p>Secondly, the EventHubTrigger attribute was shown as unknown&#8230; (.net core), so it wouldn&#8217;t build. After some digging, I found this <a href="https://github.com/Azure/Azure-Functions/issues/652">GitHub issue</a>, which mentions that you&#8217;ll need to reference <code>Microsoft. Azure.Web jobs.extensions.eventhubs</code>. This is echoed in this [GitHub issue] (<a href="https://github.com/Azure/azure-webjobs-sdk/issues/1558">https://github.com/Azure/azure-webjobs-sdk/issues/1558</a>)</p><p>You may also find that when you start your local Azure function emulator that you get an error like <code>method not found microsoft.azure.eventhubs.eventhubclient</code> to fix this you&#8217;ll need to reference a the following nuget package <code>Microsoft.Azure.EventHubs.Processor</code></p><p>Make sure to add the hub connection strings to your <code>local.settings.json</code> file:</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!zLYT!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0096c87b-198a-445c-a364-afe13c5d5ef6_664x230.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!zLYT!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0096c87b-198a-445c-a364-afe13c5d5ef6_664x230.png 424w, https://substackcdn.com/image/fetch/$s_!zLYT!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0096c87b-198a-445c-a364-afe13c5d5ef6_664x230.png 848w, https://substackcdn.com/image/fetch/$s_!zLYT!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0096c87b-198a-445c-a364-afe13c5d5ef6_664x230.png 1272w, https://substackcdn.com/image/fetch/$s_!zLYT!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0096c87b-198a-445c-a364-afe13c5d5ef6_664x230.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!zLYT!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0096c87b-198a-445c-a364-afe13c5d5ef6_664x230.png" width="664" height="230" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/0096c87b-198a-445c-a364-afe13c5d5ef6_664x230.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:230,&quot;width&quot;:664,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;Event Hubs local app settings file&quot;,&quot;title&quot;:&quot;Event Hubs local app settings file&quot;,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Event Hubs local app settings file" title="Event Hubs local app settings file" 
srcset="https://substackcdn.com/image/fetch/$s_!zLYT!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0096c87b-198a-445c-a364-afe13c5d5ef6_664x230.png 424w, https://substackcdn.com/image/fetch/$s_!zLYT!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0096c87b-198a-445c-a364-afe13c5d5ef6_664x230.png 848w, https://substackcdn.com/image/fetch/$s_!zLYT!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0096c87b-198a-445c-a364-afe13c5d5ef6_664x230.png 1272w, https://substackcdn.com/image/fetch/$s_!zLYT!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0096c87b-198a-445c-a364-afe13c5d5ef6_664x230.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p>When you deploy for the first time, you may see an error like this: <code>Microsoft.Azure.WebJobs.Host: Error indexing method (..) Microsoft.Azure.WebJobs.EventHubs: Value cannot be null. Parameter name: receiverConnectionString</code></p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!23BB!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F79a1053a-11db-43bb-82a5-05bcaa8ae22a_511x114.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!23BB!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F79a1053a-11db-43bb-82a5-05bcaa8ae22a_511x114.png 424w, https://substackcdn.com/image/fetch/$s_!23BB!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F79a1053a-11db-43bb-82a5-05bcaa8ae22a_511x114.png 848w, https://substackcdn.com/image/fetch/$s_!23BB!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F79a1053a-11db-43bb-82a5-05bcaa8ae22a_511x114.png 1272w, https://substackcdn.com/image/fetch/$s_!23BB!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F79a1053a-11db-43bb-82a5-05bcaa8ae22a_511x114.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!23BB!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F79a1053a-11db-43bb-82a5-05bcaa8ae22a_511x114.png" width="511" height="114" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/79a1053a-11db-43bb-82a5-05bcaa8ae22a_511x114.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:114,&quot;width&quot;:511,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;Event Hubs connection string error&quot;,&quot;title&quot;:&quot;Event Hubs connection string error&quot;,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Event Hubs connection string 
error" title="Event Hubs connection string error" srcset="https://substackcdn.com/image/fetch/$s_!23BB!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F79a1053a-11db-43bb-82a5-05bcaa8ae22a_511x114.png 424w, https://substackcdn.com/image/fetch/$s_!23BB!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F79a1053a-11db-43bb-82a5-05bcaa8ae22a_511x114.png 848w, https://substackcdn.com/image/fetch/$s_!23BB!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F79a1053a-11db-43bb-82a5-05bcaa8ae22a_511x114.png 1272w, https://substackcdn.com/image/fetch/$s_!23BB!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F79a1053a-11db-43bb-82a5-05bcaa8ae22a_511x114.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p>If you do, simply go back to the publish screen and hit <code>Edit Azure App Service Settings</code> Edit the settings and set the connection string to that of your event hub (the one we copied earlier)</p><p>This should now deploy correctly.</p><h2><strong>Wrap Up</strong></h2><p>Azure functions are a fantastic tool for integrating with event hub.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.codewithadam.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p>]]></content:encoded></item><item><title><![CDATA[Using Terraform Modules from Github in Azure DevOps]]></title><description><![CDATA[When you start working with Terraform, it won&#8217;t be long before you end up writing your own Terraform modules.]]></description><link>https://www.codewithadam.com/p/using-terraform-modules-from-github</link><guid isPermaLink="false">https://www.codewithadam.com/p/using-terraform-modules-from-github</guid><dc:creator><![CDATA[Adam White]]></dc:creator><pubDate>Fri, 19 Jul 2024 13:17:25 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!zPlo!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefc0ce1b-50fc-4904-904b-da044fbdf063_1186x528.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>When you start working with Terraform, it won&#8217;t be long before you end up writing your own Terraform modules. Modules allow you to package up our terrform code into logical, reusuable units of work, that you can reuse within your team, and you can even share them with others. 
Which is what Gruntwork has done with their impressive collection.</p><p>If you do end up creating your own modules, you should be testing them and version controlling them.</p><p>When using modules within your terraform configurations, you shouldn&#8217;t really be downloading and packaging these manually. Fortunately, Terraform has a fantastic feature which allows you to configure your terraform code to pull these modules directly from a git repository.</p><p>I&#8217;ll take you through how to set this up to work with Azure DevOps and Github repositories.</p><p>This was a challenge my team and I had to overcome when we were looking to deploy our Terraform code from our Azure DevOps pipelines; hopefully I&#8217;ll save you from some of the pain we went through.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.codewithadam.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p><h2><strong>Referencing your custom terraform modules in Git</strong></h2><p>To be able to use common terraform modules directly from a git repo, the first thing you need to do is amend your terraform configurations to pull those modules from a git repo instead of a local folder. This change is rather simple; it&#8217;s just a case of changing the source field to reference a git url rather than a folder path, cool eh?</p><h2><strong>SSH or HTTPS?</strong></h2><p>If you are accessing an unsecured repo, then using HTTPS is most likely the easiest solution. You can grab the url from github, but it&#8217;ll look something like <code>https://github.com/codewithadam/terraform-common-modules.git</code></p><p>For repositories that are secured, whether in private Github repos, Azure Repos, or some other flavor, using an SSH key is often an easier solution than trying to work with credentials over HTTP. Even more so if you are going to be running your terraform code on a non-windows machine such as a linux build agent.</p><p>For SSH, the url will look something like: <code>git@github.com:codewithadam/terraform-common-modules</code></p><p>Do note that with either method, the username/organisation is specified in the url.</p><h2><strong>Changes to make to your Terraform files</strong></h2><p>So you have your git url; now, as I mentioned above, you need to update your terraform files to look at git rather than your folder path.</p><pre><code><code>module "az-function-app" {
  source       = "c:\\Dev\\terraform\\modules\\functionApp"
  app_name     = "BadgerDuck"
  regions      = ["uksouth", "ukwest"]
}</code></code></pre><p>To switch to the url versions, update the source from a folder to a url:</p><h3><strong>https</strong></h3><pre><code><code>module "az-function-app" {
  source       = "https://github.com/codewithadam/terraform-common-modules.git"
  app_name     = "BadgerDuck"
  regions      = ["uksouth", "ukwest"]
}</code></code></pre><h3><strong>ssh</strong></h3><pre><code><code>module "az-function-app" {
  source       = "git@github.com:codewithadam/terraform-common-modules"
  app_name     = "BadgerDuck"
  regions      = ["uksouth", "ukwest"]
}</code></code></pre><p>If you are lucky enough to be using an unathenticated repository then all your need to do is run <code>terraform init</code> and your modules should download and work as they did before, if however you are using an authenticated repo using ssh then further steps are required, this is how I ended up setting this up.</p><h2><strong>Setup SSH Keys</strong></h2><p>To allow your local machine or build agent to access the authenticated repo via ssh, you&#8217;ll need to setup an SSH key, I&#8217;ll take you through the steps to set this up in Azure DevOps and in Github.</p><h2><strong>Generate an SSH Key</strong></h2><p>The very first thing you&#8217;ll need to do is to generate a public and private key pair that you can use for authenication. OpenSSH is capable of doing this and is available on most systems. Open a command prompt and run the following command, replace the with a string that works for you, I used the name of the module collection.</p><pre><code><code>ssh-keygen -C "&lt;your filename&gt;"</code></code></pre><p>This command will ask you where you want the files to be saved, enter a full folder path and file name <code>c:\ssh-keys\&lt;string&gt;</code>, you&#8217;ll also be asked for a passphrase, this is simply a password to protect your keys, you can immediate hit enter twice and not enter one or you can take the extra step and setup a password. Once all done you&#8217;ll get two files</p><ul><li><p>- this will be your private key</p></li><li><p>.pub - this will be your public key</p></li></ul><h2><strong>Adding your key to Azure DevOps</strong></h2><p>Open up the Azure DevOps Port, and click on the settings icon at the top right of the screen and go to &#8220;SSH Public Keys&#8221;</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!qe3L!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d34a8c5-bd90-4838-93ff-f52f086b42f7_229x408.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!qe3L!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d34a8c5-bd90-4838-93ff-f52f086b42f7_229x408.jpeg 424w, https://substackcdn.com/image/fetch/$s_!qe3L!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d34a8c5-bd90-4838-93ff-f52f086b42f7_229x408.jpeg 848w, https://substackcdn.com/image/fetch/$s_!qe3L!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d34a8c5-bd90-4838-93ff-f52f086b42f7_229x408.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!qe3L!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d34a8c5-bd90-4838-93ff-f52f086b42f7_229x408.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!qe3L!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d34a8c5-bd90-4838-93ff-f52f086b42f7_229x408.jpeg" width="229" height="408" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4d34a8c5-bd90-4838-93ff-f52f086b42f7_229x408.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:408,&quot;width&quot;:229,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;GitHub public SSH keys&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="GitHub public SSH keys" title="GitHub public SSH keys" srcset="https://substackcdn.com/image/fetch/$s_!qe3L!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d34a8c5-bd90-4838-93ff-f52f086b42f7_229x408.jpeg 424w, https://substackcdn.com/image/fetch/$s_!qe3L!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d34a8c5-bd90-4838-93ff-f52f086b42f7_229x408.jpeg 848w, https://substackcdn.com/image/fetch/$s_!qe3L!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d34a8c5-bd90-4838-93ff-f52f086b42f7_229x408.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!qe3L!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d34a8c5-bd90-4838-93ff-f52f086b42f7_229x408.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Click on the &#8220;New Key&#8221; button, enter a name for your public key, make it make sense as to what it&#8217;s for, then put into the &#8220;public Key Data&#8221; section the contents of the <code>&lt;your filename&gt;.pub</code> file, this will start with <code>ssh-rsa</code> your account is now setup to use SSH keys.</p><h2><strong>Adding your key to GitHub</strong></h2><p>Within github, go to your repository, click on settings, then on the left click on Deploy Keys.</p><div 
class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!zPlo!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefc0ce1b-50fc-4904-904b-da044fbdf063_1186x528.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!zPlo!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefc0ce1b-50fc-4904-904b-da044fbdf063_1186x528.jpeg 424w, https://substackcdn.com/image/fetch/$s_!zPlo!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefc0ce1b-50fc-4904-904b-da044fbdf063_1186x528.jpeg 848w, https://substackcdn.com/image/fetch/$s_!zPlo!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefc0ce1b-50fc-4904-904b-da044fbdf063_1186x528.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!zPlo!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefc0ce1b-50fc-4904-904b-da044fbdf063_1186x528.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!zPlo!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefc0ce1b-50fc-4904-904b-da044fbdf063_1186x528.jpeg" width="1186" height="528" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/efc0ce1b-50fc-4904-904b-da044fbdf063_1186x528.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:528,&quot;width&quot;:1186,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;GitHub SSH deploy key&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="GitHub SSH deploy key" title="GitHub SSH deploy key" srcset="https://substackcdn.com/image/fetch/$s_!zPlo!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefc0ce1b-50fc-4904-904b-da044fbdf063_1186x528.jpeg 424w, https://substackcdn.com/image/fetch/$s_!zPlo!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefc0ce1b-50fc-4904-904b-da044fbdf063_1186x528.jpeg 848w, https://substackcdn.com/image/fetch/$s_!zPlo!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefc0ce1b-50fc-4904-904b-da044fbdf063_1186x528.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!zPlo!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefc0ce1b-50fc-4904-904b-da044fbdf063_1186x528.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" 
stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Click <code>Add deploy key</code>, in the title field, give it a meaningful name, enter the contents of your <code>&lt;your filename&gt;.pub</code> file which will start with <code>ssh-rsa</code>, you can then select <code>Allow write access</code> if you need your pipeline to be able to make changes to the repo, otherwise leave it unchecked for read access. Then click <code>Add key</code>.</p><h2><strong>Accessing Terraform Modules in Azure DevOps Pipelines</strong></h2><p>With our Terraform code files referencing git repos for their supporting modules and with our SSH keys setup, we can now look at getting Azure DevOps Pipeslines setup to be able to run these Terraform build and deployments.</p><p>As our repositories are all authenticated and used SSH we need to setup our pipelines with the correct information to connect over SSH.</p><h3><strong>Upload your private key</strong></h3><p>Firstly, we need to add our private key, the one we generated earlier into Azure DevOps, without it, we won&#8217;t be able to connect. Now being a secure key, we need to store this securely. Thankfully, Azure Pipelines has a method to handle this, within the Library we can upload a secure file, isn&#8217;t that handy!</p><p>In your Azure DevOps project, go to the pipelines section, then select Library, in the top menu you&#8217;ll see a section called <code>Secure Files</code> click this.</p><p>Click on the <code>+ Secure File</code> button and upload the <code>&lt;your filename&gt;</code> from before, not the one ending in .pub.</p><h3><strong>Known Host</strong></h3><p>This is something, that I missed on the first go at this. An important bit of information that Azure Devops needs, is the known host entry. This information identifies the server we wish to connect to and tells Azure DevOps to trust it, this stops it prompting us to ask if it&#8217;s ok to connect, which is ideal, as during a build/release we have no way to answer that prompt.</p><p>To get the known hose you need to run the following command:</p><pre><code><code>ssh-keyscan &lt;hostname of your repo&gt;</code></code></pre><p>replace <code>&lt;hostname of your repo&gt;</code> with the actual hostname of the repo you are connecting to. For example, if you were using github like us then it&#8217;ll be <code>github.com</code> or it&#8217;ll be <code>ssh.dev.azure.com</code> if you are using an Azure DevOps repo. 
This command will return a block of text, you&#8217;ll need to copy this test, see the bits highlight below.</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!2jOC!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3e3d1b42-08b5-4298-b1af-f11051b32f5a_976x146.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!2jOC!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3e3d1b42-08b5-4298-b1af-f11051b32f5a_976x146.jpeg 424w, https://substackcdn.com/image/fetch/$s_!2jOC!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3e3d1b42-08b5-4298-b1af-f11051b32f5a_976x146.jpeg 848w, https://substackcdn.com/image/fetch/$s_!2jOC!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3e3d1b42-08b5-4298-b1af-f11051b32f5a_976x146.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!2jOC!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3e3d1b42-08b5-4298-b1af-f11051b32f5a_976x146.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!2jOC!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3e3d1b42-08b5-4298-b1af-f11051b32f5a_976x146.jpeg" width="976" height="146" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/3e3d1b42-08b5-4298-b1af-f11051b32f5a_976x146.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:146,&quot;width&quot;:976,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;GitHub SSH deploy key&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="GitHub SSH deploy key" title="GitHub SSH deploy key" srcset="https://substackcdn.com/image/fetch/$s_!2jOC!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3e3d1b42-08b5-4298-b1af-f11051b32f5a_976x146.jpeg 424w, https://substackcdn.com/image/fetch/$s_!2jOC!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3e3d1b42-08b5-4298-b1af-f11051b32f5a_976x146.jpeg 848w, https://substackcdn.com/image/fetch/$s_!2jOC!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3e3d1b42-08b5-4298-b1af-f11051b32f5a_976x146.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!2jOC!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3e3d1b42-08b5-4298-b1af-f11051b32f5a_976x146.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><h3><strong>Create the DevOps Pipeline</strong></h3><p>We now have done all the pre-work and have the values we need to successfully create a pipeline. 
The very first thing we&#8217;ll need to do is set up some variables to hold these values. Because these values are all sensitive, we&#8217;ll create them using the variables pane in the DevOps pipeline; we&#8217;ll need to create the following variables.</p><ul><li><p><code>known_host</code> - this will contain the text collected from the ssh-keyscan command above</p></li><li><p><code>ssh_public_key</code> - this will contain the contents of your <code>&lt;your filename&gt;.pub</code></p></li><li><p><code>ssh_passphrase</code> - if you entered a passphrase to protect your generated key, you should enter this here.</p></li><li><p><code>ssh_privateKeyName</code> - this will be the name of <code>&lt;your filename&gt;</code> which you uploaded to secure files</p></li></ul><p>With the variables configured, we can move on to setting up the pipeline. The first task we&#8217;ll need is a task to install an SSH key; without this, Terraform won&#8217;t be able to use our SSH keys. We&#8217;ll need to use the <a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/tasks/utility/install-ssh-key?view=azure-devops">Install SSH Key Task</a>. The YAML will look something like this.</p><pre><code><code>- task: InstallSSHKey@0
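  # the values below map to the secret pipeline variables created earlier; the private key itself is
  # pulled by name from the Secure Files library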
  inputs:
    knownHostsEntry: $(known_host)
    sshPublicKey: $(ssh_public_key)
    sshPassphrase: $(ssh_passphrase)
    sshKeySecureFile: $(ssh_privateKeyName)</code></code></pre><h3><strong>Deploy your Terraform config</strong></h3><p>With the SSH keys set up and our Terraform files all referencing the git repo using SSH, that&#8217;s it; it should just work now. When we run terraform init as a part of the build pipeline, it will use the installed keys to automatically check out your modules directly from git.</p><p>Congratulations! Hopefully, you found this without spending hours looking into why it&#8217;s not working!</p><h2><strong>Versions and Branching</strong></h2><p>The urls mentioned above will check out your modules from the main/master branch of your repository. Sometimes you&#8217;ll want to check out a specific branch or tag. Such as:</p><ul><li><p>you are developing a new version of the module on a branch and you want to test it</p></li><li><p>you&#8217;ve tagged a release of your module with a version number and want to lock yourself to that version</p></li></ul><p>Both are very sensible reasons; if you want to use a specific version or tag then all you need to do is amend the url used in your Terraform configurations to use the <code>ref</code> attribute, where you can specify the branch or tag name.</p><h3><strong>getting a specific branch</strong></h3><pre><code><code>module "az-function-app" {
  source       = "git@github.com:codewithadam/terraform-common-modules?ref=develop"
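  # ref accepts a branch name, a tag, or a full commit SHA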
  app_name     = "BadgerDuck"
  regions      = ["uksouth", "ukwest"]
}</code></code></pre><h3><strong>getting a specific tag</strong></h3><pre><code><code>module "az-function-app" {
  source       = "git@github.com:codewithadam/terraform-common-modules?ref=v1.0.0"
  app_name     = "BadgerDuck"
  regions      = ["uksouth", "ukwest"]
}</code></code></pre><h2><strong>Gotchas</strong></h2><p>Terraform expects you to have 1 git repo per module, if you don&#8217;t conform to this you&#8217;ll get an error <code>Error: Failed to download module</code> with a further explanation <code>fatal: Could not read from remote repository</code>.</p><p>If you look at how The Terraform registry is laid out this will make sense.</p><h2><strong>Wrap Up</strong></h2><p>Hopefully this article has been helpful,I was stumped for a while, hence this article, now I can&#8217;t take entire credit for this, as the missing piece of the puzzle was given to me by <a href="https://samcogan.com/using-terraform-modules-from-git-in-azure-devops/">Sam Cogan</a>, it was his fantastic article which led me to the known_hosts issue.</p>]]></content:encoded></item><item><title><![CDATA[Generating Sas Tokens using Azure Managed Identity (User Delegation)]]></title><description><![CDATA[It&#8217;s possible with Azure Blob Storage to generate a Shared Access Signature (SAS) which you can allow any third party limited access to a blob.]]></description><link>https://www.codewithadam.com/p/generating-sas-tokens-using-azure</link><guid isPermaLink="false">https://www.codewithadam.com/p/generating-sas-tokens-using-azure</guid><dc:creator><![CDATA[Adam White]]></dc:creator><pubDate>Fri, 19 Jul 2024 11:59:49 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!7gmQ!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbec5b551-5cc7-4943-bb9d-e64eb25a0f65_144x144.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>It&#8217;s possible with Azure Blob Storage to generate a <a href="https://docs.microsoft.com/en-us/azure/storage/common/storage-sas-overview">Shared Access Signature (SAS)</a> which you can allow any third party limited access to a blob. This access can be limited in time and actions such as Read, Write, or more to a specific file held within blob storage. You can also provide access to the entire blob container if you wish.</p><p>There is a new way to generate in the new <a href="https://docs.microsoft.com/en-us/dotnet/api/overview/azure/storage">Blob Storage SDK</a>, and the big thing here is the ability to generate those SAS tokens without a storage account key.</p><h2><strong>User Delegation SAS</strong></h2><p>The typical way to generate a SAS token in code requires the storage account key. This assumes you have the storage account key, and there are scenarios where you just won&#8217;t have access to that. This is the situation I found myself in. You&#8217;ll find many answers online, but none of them lead you down the right path if you are as unlucky as I was. If you need to use <a href="https://docs.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/overview">&#8220;Managed Identity&#8221; </a>to control access to your storage accounts in code, which is something I highly recommend wherever possible as this is a security best practice. In this scenario, you won&#8217;t have a storage account key, so you&#8217;ll need to find another way to generate the shared access signatures.</p><p>To do that, we need to use an approach called <a href="https://docs.microsoft.com/en-us/azure/storage/blobs/storage-blob-user-delegation-sas-create-dotnet">&#8220;user delegation&#8221; SAS </a>. 
By using a user delegation SAS, we can sign the signature with the Azure Ad credentials instead of the storage account key.</p><p>I&#8217;ll show you below exactly what code you need to generate the user delegation SAS URI with the .Net storage SDK; I&#8217;ll also cover a few gotchas that caught me out and how to test this within visual studio locally.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.codewithadam.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p><h2><strong>Generating a User Delegation SAS</strong></h2><p>Connecting to Azure Storage using Entra Id (Azure Active Directory) Credentials is made incredibly easy thanks to the <a href="https://docs.microsoft.com/en-us/dotnet/api/azure.identity.defaultazurecredential?view=azure-dotnet">DefaultAzureCredential</a>. This helper class tries a variety of different techniques to source the credentials required to access a storage account.</p><p>Firstly it checks for environment variables. If these aren&#8217;t present, it attempts to use a managed identity (this is what you want in production!). Should that fail, it has a range of fallback options that it will try, and these are great for local development. It can use the credentials you logged into visual studio, Visual Studio (VS) Code, Azure CLI with. So in a range of development environments, this will work. I&#8217;ll show you below how to set the managed identity you want to use in Visual Studio, which can be the one you&#8217;ve logged in with or something else entirely.</p><p>Here&#8217;s how you would use the <code>DefaultAzureCredential</code> to create a <code>BlobServiceClient</code>:</p><pre><code><code>var storageAccountName  = "ducksandbadgersstore";
var storageAccountUriString = $"https://{storageAccountName}.blob.core.windows.net";
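// DefaultAzureCredential tries environment variables first, then a managed identity, and finally
// developer credentials (Visual Studio, VS Code, Azure CLI), so the same code works locally and in Azure.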
var credential = new DefaultAzureCredential();
var blobServiceClient = new BlobServiceClient(new Uri(storageAccountUriString), credential);</code></code></pre><p>With that, you&#8217;ve successfully created a blob service client using managed identity. You can test this by uploading a file into the storage account.</p><pre><code><code>var blobContainerClient = blobServiceClient.GetBlobContainerClient("duckcontainer");
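// creating the container/blob clients is a local operation; nothing hits the storage account
// until ExistsAsync/UploadAsync below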
var blobClient = blobContainerClient.GetBlobClient("duck.txt");
if(!await blobClient.ExistsAsync())
{
    using var ms = new MemoryStream(Encoding.UTF8.GetBytes("This is my secret blob"));
    await blobClient.UploadAsync(ms);
}</code></code></pre><p>All being well, you should have a file in your blob storage container, which proves that managed identity is working.</p><p>Now let&#8217;s generate a shared access signature. The first step is to create a <a href="https://docs.microsoft.com/en-us/azure/storage/blobs/storage-blob-user-delegation-sas-create-dotnet">user delegation key</a>.</p><p>The duration of this SAS token can only be set to a <strong>maximum of 7 days</strong>; otherwise, you&#8217;ll get an error if you request a longer duration. You also get an error if you mess up the dates; I sent in the same start and end date of &#8220;Now&#8221; due to a config issue. When that happens, you&#8217;ll see an error with an HTTP status code of 400 (bad request) with the error being <code>"The value for one of the XML nodes is not in the correct format,"</code> which isn&#8217;t the most helpful error. If you get this, check the values, make sure they make sense! Asking for a SAS token that immediately expires isn&#8217;t sensible!</p><p>Getting the user delegation key is this simple:</p><pre><code><code>var userDelegationKey = await blobServiceClient
    .GetUserDelegationKeyAsync(DateTimeOffset.UtcNow, 
                               DateTimeOffset.UtcNow.AddDays(7));</code></code></pre><p>We can use the user delegation key with the BlobSasBuilder and BlobUriBuilder helpers to generate the SAS token URI. You can access the file or provide a token allowing someone write access to upload a file to your container. In the example below, I&#8217;m asking for a SAS token that&#8217;s valid for 7 days for a specific file. Do note that the SAS token doesn&#8217;t need to have the same lifetime as the user delegation key, but it <strong>cannot be longer</strong>. If you attempt to create a SAS token URI with a lifespan that&#8217;s longer than the lifespan of the user delegation key, you will get a 403 error response.</p><pre><code><code>var sasBuilder = new BlobSasBuilder()
{
    BlobContainerName = blobClient.BlobContainerName, // duckcontainer from above
    BlobName = blobClient.Name, // duck.text from above
    Resource = "b", // b for blob, c for container
    StartsOn = DateTimeOffset.UtcNow,
    ExpiresOn = DateTimeOffset.UtcNow.AddDays(7),
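    // the SAS window must fall within the user delegation key's validity period requested above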
};

sasBuilder.SetPermissions(BlobSasPermissions.Read |
                        BlobSasPermissions.Write); // read and write permissions

var blobUriBuilder = new BlobUriBuilder(blobClient.Uri)
{
    Sas = sasBuilder.ToSasQueryParameters(userDelegationKey,
                                        blobServiceClient.AccountName)
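    // the token is signed with the user delegation key, so no storage account key is ever used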
};

var sasUri = blobUriBuilder.ToUri();</code></code></pre><p>The sasUri can be used to download the file until the SAS token expires or the user delegation key expires. Whichever happens first will invalidate this SAS token.</p><p>To test the SAS token URI, here&#8217;s a simple bit of code that will download the file&#8217;s contents.</p><pre><code><code>var httpClient = new HttpClient();
try
{
    var blobContentsString = await httpClient.GetStringAsync(sasUri).ConfigureAwait(false);
    Console.WriteLine(blobContentsString);
}
catch (HttpRequestException e)
{
    Console.WriteLine("Sas token failed - Unable to download: " + e.Message);
}</code></code></pre><h2><strong>Testing Locally</strong></h2><p>If you attempt to test this locally or use a service in Azure, you may find that this doesn&#8217;t work. There&#8217;s a reason for that. You need to give the identity accessing the storage account some RBAC permissions.</p><ul><li><p>Storage Account Contributor</p></li><li><p>Storage Blob Data Contributor</p></li></ul><p>This may surprise you, as being the owner of the storage account isn&#8217;t sufficient.</p><p>You can test this in Azure CLI and even see if you can tighten the permissions by attempting the following.</p><pre><code><code>$ACCOUNT_NAME = "ducksandbadgersstore"
$CONTAINER_NAME = "duckcontainer"

# use this to test if you have the correct permissions
az storage blob exists --account-name $ACCOUNT_NAME `
                        --container-name $CONTAINER_NAME `
                        --name duck.txt --auth-mode login</code></code></pre><p>If you haven&#8217;t assigned the right RBAC permissions within the storage account, the above will fail. Head on into Azure, go to your storage account, then Access Control (IAM), then role assignments to set the permissions. Take a look at <a href="https://docs.microsoft.com/en-us/azure/app-service/scenario-secure-app-access-storage">this article</a> to see the various ways to grant the relevant RBAC permissions.</p><p>I personally do this within the CLI, and it&#8217;s this simple. First, look up your Azure AD object ID using your email address:</p><pre><code><code>$EMAIL_ADDRESS = 'adam@redturtlesoftware.com'
$OBJECT_ID = az ad user list --query "[?mail=='$EMAIL_ADDRESS'].objectId" -o tsv</code></code></pre><p>Now we need the id of the storage account to set the RBAC permissions on:</p><pre><code><code>$STORAGE_ID = az storage account show -n $ACCOUNT_NAME --query id -o tsv</code></code></pre><p>This will return a string that contains the subscriptionId, resource group, and the storage account name. For example:<code>/subscriptions/770476dd-69ac-465d-96cc-gh12bc676chk/resourceGroups/badger-rg/providers/Microsoft.Storage/storageAccounts/ducksandbadgersstore</code></p><p>We can add the RBAC permissions <code>Storage Blob Data Contributor</code> role scoped to this storage account container with this information.</p><pre><code><code>az role assignment create `
    --role "Storage Blob Data Contributor" `
    --assignee $OBJECT_ID `
    --scope "$STORAGE_ID/blobServices/default/containers/$CONTAINER_NAME"</code></code></pre><p>If this still doesn&#8217;t work for you, there is a gotcha to get around when working within visual studio. The DefaultAzureCredential may not select the correct Azure AD tenant id. There are two ways to get around this. Firstly you can set it in code like this:</p><pre><code><code>var azureCredentialOptions = new DefaultAzureCredentialOptions();
azureCredentialOptions.VisualStudioTenantId = "5546dcdg-6581-66f0-a200-da76560045433s";
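// the tenant id above is only an example value - use the Azure AD tenant that contains your storage account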
var credential = new DefaultAzureCredential(azureCredentialOptions);</code></code></pre><h2><strong>Setting Visual Studio&#8217;s Managed Identity</strong></h2><p>Secondly, you can go into Visual Studio, go to Tools -&gt; Options. Then Azure Service Authentication. It&#8217;s here where you can select an account. Or even add another account. I mentioned above about using an account that isn&#8217;t specifically the one you are logged into Visual Studio with. This is the settings section you would use to set another managed identity to use.</p><h2><strong>Summary</strong></h2><p>Managed Identities can sound a bit scary to those who haven&#8217;t used them, but actually, they are incredibly simple to use and end up making code much simpler and more secure. In the past, I&#8217;ve had to manage load balancing of storage queues and passing in storage account keys by using Azure Key Vault. It&#8217;s just a complication you can get rid of.</p><p>On top of that, Managed Identities are a much more secure way for you to access your cloud resources, giving a fine grained control as to what can be done to those resources, an account key gives complete access, RBAC gives you the ability to use the principle of least privilege. Sure, when it comes to generating SAS tokens, there is an additional hoop to jump through, but it&#8217;s still less of a hassle than setting up KeyVault and passing the secret into the code.</p><p>Hopefully, the steps above have shown you how to set the correct RBAC roles to your local user or managed identity. The C# example code is enough to help you generate that user delegation key, which is the key to SAS token generation with managed identity.</p><p>You are limited to a lifetime of 7 days with this approach, but that&#8217;s a good thing. The longer something is open, the more likely it will be attacked. It&#8217;s best practice to only have your SAS tokens alive for the shortest amount of time necessary.</p>]]></content:encoded></item></channel></rss>