Sound Code - Mark Heath's Bloghttps://markheath.net/Mark Heath's development blog.MarkBlog2024-01-08T00:00:00Zhttps://markheath.net/post/stream-read-guidelinesSome Stream.Read Guidelines2024-01-08T00:00:00Z2024-01-08T00:00:00ZMark Heathtest@example.com<p>I recently had to fix a bug due to incorrect use of <code>Stream.Read</code>, and it struck me that quite often I see bugs and performance issues related to poorly written code reading from streams. Often this is in code using <a href="https://github.com/naudio/NAudio">NAudio</a>, which has a stream-inspired API, but streams are also very commonly used in all kinds of .NET applications.</p>
<p>So in this post, I want to quickly suggest a few guidelines for better reading from streams in C#.</p>
<pre><code class="language-cs">public abstract int Read(byte[] buffer, int offset, int count);
</code></pre>
<p>For reference, <a href="https://learn.microsoft.com/en-us/dotnet/api/system.io.stream.read?view=net-8.0#system-io-stream-read(system-byte()-system-int32-system-int32)">the <code>Stream.Read</code> method shown above</a> has been part of .NET since the very beginning. It is still very commonly used, although there are some additional overloads and alternatives, which we'll discuss shortly, that are often better choices.</p>
<h3 id="dont-ignore-the-return-value">Don't ignore the return value</h3>
<p>The <code>Read</code> method returns an integer that indicates how many bytes were actually read from the stream. A common source of bugs is ignoring this return value and assuming it is the same as the <code>count</code> parameter. However, as is documented, it is possible for <code>Read</code> to return fewer bytes than were requested even if the end of the stream was not reached:</p>
<blockquote>
<p>"An implementation is free to return fewer bytes than requested even if the end of the stream has not been reached."</p>
</blockquote>
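<p>The standard defensive pattern is a read loop. As a rough sketch (a hypothetical helper, not part of any library), this keeps calling <code>Read</code> until it either has the requested number of bytes or reaches the end of the stream:</p>
<pre><code class="language-cs">// Keeps reading until 'count' bytes have been read, or the stream ends.
// Returns the total number of bytes read, which is only less than
// 'count' if the end of the stream was reached.
static int ReadUpTo(Stream stream, byte[] buffer, int offset, int count)
{
    int totalRead = 0;
    while (totalRead < count)
    {
        int read = stream.Read(buffer, offset + totalRead, count - totalRead);
        if (read == 0) break; // end of stream
        totalRead += read;
    }
    return totalRead;
}
</code></pre>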
<p>There is now a <a href="https://learn.microsoft.com/en-us/dotnet/api/system.io.stream.readexactly?view=net-8.0">newer method called <code>ReadExactly</code></a>, introduced in .NET 7, which will read exactly the number of bytes you asked for and throw an exception if that is not possible. Although there are situations in which this might be useful, there are some other considerations to weigh before selecting it, which we'll discuss next.</p>
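<p>For illustration, here's a minimal sketch of using <code>ReadExactly</code>, which throws an <code>EndOfStreamException</code> if the stream ends before the buffer has been filled:</p>
<pre><code class="language-cs">var header = new byte[12]; // e.g. a fixed-size file header
try
{
    stream.ReadExactly(header, 0, header.Length);
}
catch (EndOfStreamException)
{
    // the stream contained fewer bytes than we required
}
</code></pre>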
<h3 id="avoid-trying-to-read-an-entire-stream-in-one-call-to-read">Avoid trying to read an entire stream in one call to Read</h3>
<p>One thing I very commonly see in both NAudio and other C# code that uses streams is developers attempting to read the entire stream in a single call to <code>Read</code>. This typically happens when we somehow already know how many bytes are (or should be) in the stream.</p>
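<p>The problematic pattern usually looks something like this sketch (and note that <code>Length</code> isn't even available on non-seekable streams):</p>
<pre><code class="language-cs">// DON'T do this: the return value is ignored, and Read is not
// guaranteed to fill the buffer in a single call
var buffer = new byte[stream.Length];
stream.Read(buffer, 0, buffer.Length);
</code></pre>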
<p>Aside from the bug mentioned earlier (you shouldn't necessarily expect <code>Read</code> to return the whole stream even if you asked for it), reading an entire stream in one call to <code>Read</code> often defeats the purpose of using streams in the first place, which is efficiency.</p>
<p>A stream allows us to read data in chunks, processing it bit by bit, removing the need to hold everything in memory and allowing us to stop processing early if it turns out we didn't need everything. By reading an entire stream into memory up front, you miss out on these performance advantages.</p>
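<p>As a rough sketch of what that looks like (where <code>ProcessChunk</code> is a hypothetical method representing whatever work you're doing):</p>
<pre><code class="language-cs">var buffer = new byte[16 * 1024]; // one reusable 16KB buffer
int bytesRead;
while ((bytesRead = stream.Read(buffer, 0, buffer.Length)) > 0)
{
    // only the first bytesRead bytes of the buffer are valid on this iteration
    ProcessChunk(buffer, bytesRead);
}
</code></pre>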
<h3 id="use-readasync">Use ReadAsync</h3>
<p>I should also mention that in most cases it's more appropriate to use the <a href="https://learn.microsoft.com/en-us/dotnet/api/system.io.stream.readasync?view=net-8.0"><code>ReadAsync</code> overload</a>. This is because with streams there is almost always some kind of network or disk IO going on behind the scenes. Reading asynchronously makes more efficient use of threads, and also allows for cancellation.</p>
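<p>A sketch of the same read loop in asynchronous form, passing a <code>CancellationToken</code> so the operation can be abandoned:</p>
<pre><code class="language-cs">var buffer = new byte[16 * 1024];
int bytesRead;
// the byte[] converts implicitly to Memory<byte> for this overload
while ((bytesRead = await stream.ReadAsync(buffer, cancellationToken)) > 0)
{
    // process bytesRead bytes...
}
</code></pre>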
<h3 id="avoid-unnecessary-buffer-allocation">Avoid unnecessary buffer allocation</h3>
<p>Another common problem I often see with calls to <code>Stream.Read</code> or related methods is unnecessarily allocating a new memory buffer for every call to read into. Not only is it usually possible to reuse the same buffer on each call to <code>Read</code>, but there is now also support for <a href="https://learn.microsoft.com/en-us/dotnet/api/system.io.stream.read?view=net-8.0#system-io-stream-read(system-span((system-byte)))"><code>Span<byte></code> on <code>Read</code></a> or <a href="https://learn.microsoft.com/en-us/dotnet/api/system.io.stream.readasync?view=net-8.0#system-io-stream-readasync(system-memory((system-byte))-system-threading-cancellationtoken)"><code>Memory<byte></code> on <code>ReadAsync</code></a>, giving you more flexibility over the memory you choose to use as a buffer (learn more about guidelines for these types <a href="https://learn.microsoft.com/en-us/dotnet/standard/memory-and-spans/memory-t-usage-guidelines">here</a>). You could even consider using <a href="https://learn.microsoft.com/en-us/dotnet/api/system.buffers.arraypool-1.rent?view=net-8.0"><code>ArrayPool.Rent</code></a> as an alternative to maintaining your own reusable memory buffer.</p>
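<p>For example, here's a sketch of renting a buffer from the shared pool instead of allocating one (note that <code>Rent</code> may return an array larger than requested, so we track the size we asked for):</p>
<pre><code class="language-cs">const int BufferSize = 16 * 1024;
byte[] buffer = ArrayPool<byte>.Shared.Rent(BufferSize);
try
{
    int bytesRead;
    while ((bytesRead = await stream.ReadAsync(
        buffer.AsMemory(0, BufferSize), cancellationToken)) > 0)
    {
        // process bytesRead bytes...
    }
}
finally
{
    ArrayPool<byte>.Shared.Return(buffer); // always return what you rent
}
</code></pre>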
<h3 id="summary">Summary</h3>
<p>A stream-based programming model allows you to write efficient code to process large amounts of data, but only if you take care to do so. Hopefully this post has provided a few helpful pointers, and let me know in the comments if you have any additional recommendations of your own.</p>
https://markheath.net/post/2023-in-review2023 in review2024-01-01T00:00:00Z2024-01-01T00:00:00ZMark Heathtest@example.com<p>Happy new year everyone. It's time for (yet) another <a href="https://markheath.net/category/year%20in%20review">year in review</a> post. This year in some ways has felt like a slightly less productive year than normal in terms of blogging, speaking and creating courses. There's been several reasons for that.</p>
<p>One was that unfortunately I had a lot of back pain this year, which is still not completely gone, and that certainly had an impact on what I was able to accomplish. Another was that a lot of what I've done in my day job this year has been using AKS and Azure Pipelines. And although I normally like to blog about what I've learned how to do, a lot of the time I've not been completely sure I've done things in the best way, and so I've often felt like I'm not quite ready to write tutorials!</p>
<p>A more positive reason was that once again I have enjoyed devoting a bit more of my free time to music performance and production, which is one of my favourite hobbies - I played live 31 times in 2023 and managed to keep my habit of <a href="https://open.spotify.com/artist/4036iD5XfdOJvs4MNVZlSY?si=i_oyYV_XRte3lnWbSBbePQ">writing and recording one song a month going</a> in an attempt to improve my skills. As much as I do enjoy creating programming related content, there can be a risk of burnout if you do too much of it, so I'm ensuring I make time for other creative pursuits as well.</p>
<p>I've also not published any new <a href="https://pluralsight.pxf.io/c/1192349/424552/7490?u=www%2Epluralsight%2Ecom%2Fauthors%2Fmark-heath">courses with Pluralsight</a> this year, for the first time in many years. That was partly due to some internal restructuring going on at Pluralsight, but it is looking like I will be producing another course next year, so I will keep you updated on that.</p>
<h3 id="conferences">Conferences</h3>
<p>I am grateful to have been renewed as a Microsoft MVP for the 7th year, which allowed me to once again attend (virtually) the MVP Summit. Obviously this year, AI was a major theme, and I have already had some opportunities to try out the <a href="https://azure.microsoft.com/en-us/products/ai-services/openai-service">Azure OpenAI service</a>. It will be interesting to see what happens this year as developers try to work out how to make the best use of the power of AI whilst understanding where the limitations and risks of using it lie. It was also great to see the release of <a href="https://learn.microsoft.com/en-us/dotnet/core/whats-new/dotnet-8">.NET 8</a> and I think <a href="https://learn.microsoft.com/en-us/dotnet/aspire/get-started/aspire-overview">.NET Aspire</a> has potential to greatly improve the experience of developing cloud native services using .NET.</p>
<p>In August I had the privilege of attending the first ever <a href="https://cphdevfest.com/">Copenhagen Developers Festival</a>, and <a href="https://www.youtube.com/watch?v=hMfSLhitcHk">spoke about audio in .NET</a>. It was particularly great to be at a conference that made space for several talks about creative uses of technology (thanks especially to Dylan Beattie), and I hope to see that trend continue. I was also able to speak about Dapr at <a href="https://www.meetup.com/developersouthcoast/events/290665726/">Developer South Coast</a> earlier in the year.</p>
<p><iframe src="https://www.youtube.com/embed/hMfSLhitcHk" class="youtube" width="500" height="281" frameborder="0" allowfullscreen=""></iframe></p>
<h3 id="plans-for-2024">Plans for 2024</h3>
<p>This year, I'll be <a href="https://ndclondon.com/agenda/is-net-any-good-for-audio-0ss5/0ru4fpjs5mm">giving my audio talk again at NDC London</a> as well as working towards another Pluralsight course. I've also been doing a bit more audio-related coding recently (although not in C#) and maybe some of that will eventually turn into articles on my blog when it's ready to share.</p>
<p>As always, I'm very thankful for everyone who has helped and supported me this year, whether by watching my courses, reading my tutorials, attending talks or giving me feedback. I know that many of you had a particularly difficult year last year, and I hope that you have a safe and healthy 2024.</p>
https://markheath.net/post/what-are-we-optimizing-forWhat are we optimizing for?2023-12-26T00:00:00Z2023-12-26T00:00:00ZMark Heathtest@example.com<p>Over the last few years, most Decembers I have attempted the amazing "<a href="https://adventofcode.com/">Advent of Code</a>" challenge. This daily set of puzzles is a great way to sharpen your coding skills or perhaps learn a new language. This year I sadly didn't have the time to participate, although I did help out a few friends with some of the challenges.</p>
<p>One of the interesting questions that Advent of Code raises about coding in general is: "what should we be trying to optimize for?" Is it purely about completing the challenge in as short a time as possible? Or should we try to complete the challenge in the fewest lines of code? Maybe the best solution is the one that solves the problem the fastest, or the one that has the most elegant design? Or perhaps it is the solution that is most extensible or reusable?</p>
<p>Clearly, the criteria you consider most important will have a big impact on how you tackle a programming task. And very often these goals compete with each other, making it important to be clear in your mind about which is your top priority.</p>
<p>In this post, let's briefly consider several possible goals that we might attempt to optimize our development process for.</p>
<h3 id="speed-of-coding">1. Speed of coding</h3>
<p>Although Advent of Code is an artificial challenge, where you're competing against other coders to gain a place on the global leaderboard (something I only achieved once as you have to get up very early even to stand a chance!), the pressure to get a coding task completed as quickly as possible can be very strong in a business context.</p>
<p>Clearly, getting code written as quickly as possible is very valuable to a business. It means you can be quick to market and move on to tackle the next priority. The big downside is that prioritizing speed above all else is a recipe for introducing <a href="https://markheath.net/category/technical%20debt">Technical Debt</a>, which can dramatically slow down future development.</p>
<p>For this reason, getting things done quickly should not be considered the highest priority for development tasks (with the possible exception of fixing a critical bug in production).</p>
<h3 id="terseness-of-code">2. Terseness of code</h3>
<p>You might have heard of "<a href="https://codegolf.stackexchange.com/">code golf</a>" which is a challenge some coders enjoy where they try to solve a given problem in the fewest characters. The solutions that expert code golfers come up with are very often incredibly impressive and compact compared to the way that a regular coder might attempt the same task. But the trade-off is often readability - despite being a small amount of code, it can be almost incomprehensible to other developers.</p>
<h3 id="readability-clean-code">3. Readability (clean code)</h3>
<p>At the opposite end of the spectrum is the desire to maximize "readability" (sometimes referred to as "<a href="https://www.amazon.co.uk/Clean-Code-Handbook-Software-Craftsmanship/dp/0132350882">clean code</a>"). With clean code, you make it your goal that the next developer to <em>read</em> the code (which might be you in a few years time) has the best possible chance of understanding it.</p>
<p>This means ensuring that methods and variables are named well, comments are provided where they can add important context, and that methods are kept short and simple.</p>
<p>Generally speaking, I am a big fan of aiming for clean code, although I have seen it backfire, with developers writing dozens of simple "clean" methods spread across multiple classes, for something that would arguably have been easier to read had it been kept together in a single method.</p>
<h3 id="speed-of-execution-performance">4. Speed of execution (performance)</h3>
<p>The Advent of Code challenges often include a performance element to them. Often part 1 of the daily challenge can be solved with a naive brute force method, but part 2 requires you to optimize your algorithm in order to solve it in a reasonable amount of time.</p>
<p>This mirrors something many developers have experienced, where the small amount of test data you used to check your work performs perfectly adequately, but in the real world as the load increases, an algorithm that worked just fine for 100 items becomes unusably slow when there are 10,000.</p>
<p>Of course, the best way to approach performance is to actually measure it. Often as developers we assume we can correctly guess what the performance of our code will be, or where the bottleneck is, or what needs to be done to improve it. In reality this often leads to pointless over-complexity, optimizing something that didn't need it, as well as completely missing where the real performance issue lies.</p>
<p>I've noticed that it's often several years into the development of a large enterprise system before performance becomes a major focus. In the early stages, your focus is on gaining market share, and your number of end users may be small. But once you have become an established market leader, the focus switches to being as profitable as possible, which performance optimization can help with by driving down hosting costs. Also the more customers you have, the more performance bottlenecks you are likely to have to find and deal with.</p>
<h3 id="extensibility">5. Extensibility</h3>
<p>Each Advent of Code daily challenge has two parts. Part 1 introduces the problem, but part 2 changes the requirements in some way. This rewards you for solving part 1 in an extensible way. If you did a good job of it, solving part 2 is often straightforward.</p>
<p>However, there's a catch. You don't know in what way the problem will change for part 2. Often I find myself trying to predict what that will be, and that's great if you guess right. But just like the real world, requirements often change in unexpected ways. And the extensibility you built into part 1 can end up being a waste of effort.</p>
<p>This is one of the hardest balancing acts to get right in software development. Often you do have a good idea of the future direction in which your code is likely to need to be extended. So it makes a lot of sense to factor that into your design. But beware of the pitfalls - it's possible that the extensibility adds unnecessary complexity to your code and still isn't sufficient to meet future requirements anyway.</p>
<h3 id="reusability">6. Reusability</h3>
<p>A closely related concern is reusability, and again Advent of Code often rewards you for making generic helper classes that can be applied to multiple puzzles. I found this with commonly needed classes like 2D or 3D grids, and helpers for managing coordinates.</p>
<p>By building up a library of genuinely useful utilities, you can greatly speed up future development. You'll also benefit from the code that consumes those utilities being more readable, as the implementation details are hidden away.</p>
<p>And in most enterprise projects I've worked on, it's not uncommon for a set of "utils" to slowly grow with all kinds of helpful classes. There are some pitfalls though. One is that these libraries of utilities, whilst created with the best of intentions, end up under-used because other developers are not aware of their existence, or because they have some baked-in opinions that turn out to make them less general purpose than they need to be. Or they become so customizable (and therefore more complex to use) that it's easier to just implement what you need yourself.</p>
<p>So I would beware the temptation of trying to turn everything you write into some kind of generic reusable class. I generally wait until I genuinely need to use the same class in at least two other places before it gets promoted into being part of a utilities library.</p>
<h3 id="test-coverage">7. Test Coverage</h3>
<p>One of the nice things about the Advent of Code challenges is that you are given test input for each puzzle that you can use to validate your solution before attempting it on the real input. This encourages you to take a "test-driven" approach where you write your tests first, before writing the code, resulting in a test suite that covers 100% of your code. Although at first this approach seems like it would take longer, it can actually be a huge timesaver in terms of the number of bugs it can catch and the confidence it gives you to refactor your code.</p>
<p>Taking a test-driven approach and aiming for 100% code coverage is something that would benefit every software project. Unfortunately, it is possible to do it badly. It's not uncommon to see overly complicated unit tests filled with hundreds of lines of mock setup, which results in fragile tests that offer very little value but require a lot of maintenance. And while I think it should always be possible to write code that is testable without compromising its design, I have seen cases where the accusation of <a href="https://martinfowler.com/articles/is-tdd-dead/">"test-induced design damage"</a> is valid.</p>
<h3 id="completeness-handling-every-possible-edge-case">8. Completeness (handling every possible edge case)</h3>
<p>The way that the Advent of Code challenges are written, you get your own personalized input that you have to solve for. This means there are edge cases that could potentially appear in your input data but actually don't. Should you create a complete solution that would work for every possible input, or is it OK to make a custom solver that works only for your specific input?</p>
<p>The same challenge exists in a lot of real-world coding scenarios. We are often receiving input data from external systems, and theoretically could receive anything. But writing code that can cope with every possible input often requires us to expend a lot of time coding for and testing scenarios that may never actually occur in the real world.</p>
<p>This is one of the reasons it is extremely important that developers and testers have as much insight as possible into the real-world data that is used in production (obviously security concerns may mean this is not always accessible). That way testing and development effort can be focused on what is actually going to be received in the real world, rather than on what is theoretically possible.</p>
<h3 id="what-else">What else?</h3>
<p>Although we've looked at multiple competing concerns that the Advent of Code challenges make us think about, there are even more that crop up in business applications. Cross-cutting concerns like <strong>security</strong> and <strong>observability</strong> also need to be factored into everything you develop, and can sometimes have a big impact on the design, performance and overall time to market.</p>
<h3 id="which-is-most-important">Which is most important?</h3>
<p>It's worth recognizing that, with the exception of terseness of code, all of the goals I've mentioned above are worth pursuing. And it's also worth recognizing that it is impossible to give them all top priority.</p>
<p>For any given development task, you ought to be clear about which of these criteria are considered most important. I'm not going to attempt to prioritize the list other than to say that normally getting good test coverage and writing readable code are going to be much higher up the list than simply getting things done as soon as possible.</p>
<p>But I'd love to hear your thoughts on this subject, so feel free to add a comment and let me know what you think is most important.</p>
https://markheath.net/post/create-zip-files-in-blob-storageCreating Zip Files in Azure Blob Storage2023-11-24T00:00:00Z2023-11-24T00:00:00ZMark Heathtest@example.com<p>If you'd like to create a zip file in Azure blob storage using the <a href="https://www.nuget.org/packages/Azure.Storage.Blobs">Azure Storage SDK</a> you can of course just create the zip locally and then upload it, but in this post I want to show how you can create a zip on the fly, and show you a potential gotcha and how to work around it.</p>
<h3 id="getting-set-up">Getting set up</h3>
<p>For this example I'll use a connection string, but of course you can use <code>DefaultAzureCredential</code> if you prefer. We'll get a container client, and create a block blob client that we're going to upload to. I'll also assume that we have a local folder of files we want to zip for the purposes of this demo.</p>
<pre><code class="language-cs">var blobServiceClient = new BlobServiceClient(connectionString);
var containerName = "my-container";
var containerClient = blobServiceClient.GetBlobContainerClient(containerName);
var blobName = "zipExperiment.zip";
var zipBlockClient = containerClient.GetBlockBlobClient(blobName);
// location of the files
var files = Directory.GetFiles(@"C:\My source files");
</code></pre>
<h3 id="writing-to-the-zip">Writing to the zip</h3>
<p>To write the zip we'll first of all open a writable stream to the blob using <code>OpenWriteAsync</code> and then use <code>ZipArchive</code> from <code>System.IO.Compression</code> to create the zip archive. Then for each file we want to add, we call <code>CreateEntry</code> to create a new entry in the zip file. Then we <code>Open</code> a writable stream for that zip entry, and copy our local file contents in.</p>
<pre><code class="language-cs">var writeOptions = new BlockBlobOpenWriteOptions(); // can customize if you like with tags etc
var index = 0; // for this demo using an incrementing number to ensure each file in the zip has a unique name
using (var blobWriteStream = await zipBlockClient.OpenWriteAsync(true, writeOptions, CancellationToken.None))
using (var zipArchive = new ZipArchive(blobWriteStream, ZipArchiveMode.Create))
{
foreach (var file in files)
{
var fileName = Path.GetFileName(file);
var entry = zipArchive.CreateEntry($"{++index}{fileName}");
using var zipEntryStream = entry.Open();
using var localFileStream = File.OpenRead(file);
await localFileStream.CopyToAsync(zipEntryStream);
}
}
</code></pre>
<p>As you can see, this is all pretty straightforward, and works just fine for a typical zip file. However, there is a gotcha that I ran into when this code was used to create a zip file with a very large number of entries.</p>
<h3 id="the-block-size-issue">The block size issue</h3>
<p>In Azure, a blob can be made up of many "blocks". There is a limit of 50,000 blocks per blob, and the default size of a block is 4MB (although <a href="https://learn.microsoft.com/en-us/rest/api/storageservices/understanding-block-blobs--append-blobs--and-page-blobs#about-block-blobs">larger block sizes are supported</a>). This means that in theory you could create a 200GB zip file without adjusting the defaults.</p>
<p>However, if you use the code above, every time you finish adding an entry to the zip file, you'll end up with another block. This means that if you try to create a zip file with 50,001 fairly small files, even if the resulting zip file would only be 10s of MB in size, you wouldn't be able to create the file.</p>
<p>The culprit is that the <code>Flush</code> method is being called on the writable blob stream after we've finished uploading each zip entry. And this is interpreted as meaning that we want to complete the current block, resulting in blocks that are smaller than 4MB. The trouble is that we don't have direct control over when <code>Flush</code> is being called.</p>
<p>My solution is fairly simple: I created a decorator we can use to wrap the writable blob stream, which simply doesn't pass on calls to <code>Flush</code>. Here's the code:</p>
<pre><code class="language-cs">class AvoidFlushStream : Stream
{
private readonly Stream source;
private bool disposed;
public AvoidFlushStream(Stream source)
{
this.source = source;
}
public override bool CanRead => source.CanRead;
public override bool CanSeek => source.CanSeek;
public override bool CanWrite => source.CanWrite;
public override long Length => source.Length;
public override long Position { get => source.Position; set => source.Position = value; }
public override void Flush()
{
//Console.WriteLine("Not gonna flush");
}
public override Task FlushAsync(CancellationToken cancellationToken)
{
return Task.CompletedTask;
}
public override int Read(byte[] buffer, int offset, int count)
{
return source.Read(buffer,offset,count);
}
public override long Seek(long offset, SeekOrigin origin)
{
return source.Seek(offset, origin);
}
public override void SetLength(long value)
{
source.SetLength(value);
}
public override void Write(byte[] buffer, int offset, int count)
{
source.Write(buffer, offset, count);
}
public override Task WriteAsync(byte[] buffer, int offset, int count, CancellationToken cancellationToken)
{
return base.WriteAsync(buffer, offset, count, cancellationToken);
}
public override ValueTask WriteAsync(ReadOnlyMemory<byte> buffer, CancellationToken cancellationToken = default)
{
return base.WriteAsync(buffer, cancellationToken);
}
public override ValueTask DisposeAsync()
{
return base.DisposeAsync();
}
public override Task<int> ReadAsync(byte[] buffer, int offset, int count, CancellationToken cancellationToken)
{
return base.ReadAsync(buffer, offset, count, cancellationToken);
}
public override ValueTask<int> ReadAsync(Memory<byte> buffer, CancellationToken cancellationToken = default)
{
return base.ReadAsync(buffer, cancellationToken);
}
protected override void Dispose(bool disposing)
{
if (!disposed)
{
if (disposing)
{
source.Dispose();
}
else
{
base.Dispose(false);
}
disposed = true;
}
}
}
</code></pre>
<p>And now we just need to make one small adjustment to wrap the writable stream with <code>AvoidFlushStream</code>:</p>
<pre><code class="language-cs">using (var zipStream = new AvoidFlushStream(await blob1Client.OpenWriteAsync(true, writeOptions, CancellationToken.None)))
// ...
</code></pre>
<p>And with that change in place, creating an 80MB zip file with 50,001 entries took me 1 minute 30 seconds. Without the wrapper, it ran for many hours before eventually failing because the blob had used too many blocks.</p>
<p>Hope this is helpful to someone, and of course, if you know a better way to resolve this issue, I'd love to hear about it in the comments!</p>
https://markheath.net/post/naudio-midi-playback-soundfont-meltysynthPlaying MIDI through SoundFonts with NAudio and MeltySynth2023-08-31T00:00:00Z2023-08-31T00:00:00ZMark Heathtest@example.com<p>This week I was in Copenhagen <a href="https://cphdevfest.com/agenda/is-net-any-good-for-audio-0ca1/0rukktclrr0">speaking about NAudio</a> at the <a href="https://cphdevfest.com/">Copenhagen Developer Festival</a>.</p>
<p>I wanted to do a short demo to show off the capabilities of NAudio, and one thing I've always wished was possible with NAudio is to be able to load <a href="https://en.wikipedia.org/wiki/SoundFont">SoundFonts</a> and play MIDI messages using them. Unfortunately that's quite a large undertaking and I've never got round to implementing it.</p>
<h3 id="introducing-meltysynth">Introducing MeltySynth</h3>
<p>However, I recently stumbled across a project called <a href="https://github.com/sinshu/meltysynth">MeltySynth</a>, which builds on a couple of other C# audio projects to implement a SoundFont synthesizer and a MIDI file sequencer.</p>
<p>MeltySynth includes a simple <a href="https://github.com/sinshu/meltysynth/tree/main/Examples/NAudio">NAudio example</a>, showing how you can play a MIDI file using NAudio. The example works just great, but I also wanted to be able to live play MIDI notes from a MIDI keyboard, so I made a few enhancements.</p>
<h3 id="create-the-demo-project">Create the demo project</h3>
<p>My demo project itself is a .NET 6 Windows console application that references both MeltySynth and NAudio.</p>
<pre><code class="language-xml"><Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<OutputType>Exe</OutputType>
<TargetFramework>net6.0-windows</TargetFramework>
</PropertyGroup>
<ItemGroup>
<PackageReference Include="MeltySynth" Version="2.4.1" />
<PackageReference Include="NAudio" Version="2.2.0" />
</ItemGroup>
</Project>
</code></pre>
<h3 id="select-a-playback-device">Select a playback device</h3>
<p>The first change I wanted to make was to enable selection of the playback device to use. The MeltySynth demo used <code>WaveOut</code> but I switched it to <code>WasapiOut</code>. The following method prints out a list of available endpoints and lets you select one:</p>
<pre><code class="language-cs">private static MMDevice GetAudioEndpoint()
{
var enumerator = new MMDeviceEnumerator();
var n = 1;
Console.WriteLine("Select output device:");
var endpoints = enumerator.EnumerateAudioEndPoints(DataFlow.Render,
DeviceState.Active).ToList();
foreach (var device in endpoints)
{
Console.WriteLine($"{n++} {device.FriendlyName}");
}
var deviceNumber = Console.ReadKey(true).KeyChar - '1';
var endpoint = endpoints.First();
if (deviceNumber >= 0 && deviceNumber < endpoints.Count)
{
endpoint = endpoints[deviceNumber];
}
Console.WriteLine($"Selected {endpoint}");
return endpoint;
}
</code></pre>
<h3 id="load-a-soundfont-and-start-playback">Load a SoundFont and start playback</h3>
<p>In the <code>Main</code> method we create a <code>MidiSampleProvider</code> (<a href="https://github.com/sinshu/meltysynth/blob/main/Examples/NAudio/MidiSampleProvider.cs">from the MeltySynth demo application</a>) and point to a SoundFont (my example uses the <code>timgm6mb</code> SoundFont that you should be able to find a <a href="https://musescore.org/en/node/200866">download link for here</a>, but any GM SoundFont will do).</p>
<p>We then select the endpoint, and initialize it with the <code>MidiSampleProvider</code> and call <code>Play</code>. This will initially play silence as there are no MIDI notes for the synthesizer to play.</p>
<pre><code class="language-cs">var player = new MidiSampleProvider("D:\\Audio\\Soundfonts\\TimGM6mb.sf2");
var endpoint = GetAudioEndpoint();
using (var waveOut = new WasapiOut(endpoint, AudioClientShareMode.Shared,
    true, 20))
{
    waveOut.Init(player);
    waveOut.Play();
</code></pre>
<h3 id="listening-for-midi-events">Listening for MIDI events</h3>
<p>I also wanted to be able to play MIDI, and the simplest thing was to open every MIDI device and listen for events.</p>
<p>This is quite straightforward - <code>MidiIn.NumberOfDevices</code> indicates how many there are, and for each device we create a new instance of <code>MidiIn</code> and subscribe to the <code>MessageReceived</code> event. We then need to call <code>Start</code> on the <code>MidiIn</code> instance.</p>
<p>I've chosen to filter out certain MIDI messages that aren't relevant to my demo, and I also wanted to log some messages to the console for demo purposes.</p>
<p>Finally, if I decide I do want to forward the MIDI message to the SoundFont synthesizer, I call a method I've added to the <code>MidiSampleProvider</code> called <code>PlayNote</code>.</p>
<pre><code class="language-cs">var devices = new List<MidiIn>();
for (var n = 0; n < MidiIn.NumberOfDevices; n++)
{
var dev = MidiIn.DeviceInfo(n);
Console.WriteLine($"Listening on {dev.ProductName}");
var midi = new MidiIn(n);
midi.MessageReceived += (s, arg) =>
{
bool forwardMessage = true;
switch(arg.MidiEvent.CommandCode)
{
case MidiCommandCode.TimingClock:
case MidiCommandCode.AutoSensing:
case MidiCommandCode.KeyAfterTouch:
case MidiCommandCode.ChannelAfterTouch:
forwardMessage = false;
return;
case MidiCommandCode.NoteOn:
var noteOn = (NoteEvent)arg.MidiEvent;
if (noteOn.Velocity > 0)
Console.WriteLine($"Note On: {noteOn.NoteName} Velocity: {noteOn.Velocity}");
break;
case MidiCommandCode.NoteOff:
// don't log
break;
default:
Console.WriteLine(arg.MidiEvent.CommandCode);
break;
}
if (forwardMessage)
{
player.PlayNote(arg.RawMessage);
}
};
midi.Start();
devices.Add(midi);
}
</code></pre>
<h3 id="passing-a-midi-event-to-the-synthesizer">Passing a MIDI event to the synthesizer</h3>
<p>There is a slight mismatch between the way NAudio exposes MIDI message data and the way that the MeltySynth <code>Synthesizer</code> wants it, so I take the "raw message" and break it up into its constituent parts with some bit manipulation. Then I pass it on to the <code>Synthesizer</code> by calling <code>ProcessMidiMessage</code>.</p>
<pre><code class="language-cs">public void PlayNote(int rawMessage)
{
int b = rawMessage & 0xFF;
int data1 = (rawMessage >> 8) & 0xFF;
int data2 = (rawMessage >> 16) & 0xFF;
MidiCommandCode commandCode;
int channel = 0; // zero based channel
if ((b & 0xF0) == 0xF0)
{
// both bytes are used for command code in this case
commandCode = (MidiCommandCode)b;
}
else
{
commandCode = (MidiCommandCode)(b & 0xF0);
channel = (b & 0x0F);
}
synthesizer.ProcessMidiMessage(channel, (int)commandCode, data1, data2);
}
</code></pre>
<h3 id="triggering-a-midi-file">Triggering a MIDI File</h3>
<p>MeltySynth also has the ability to sequence MIDI files - that is, to work out when each MIDI message in the file needs to be passed to the synthesizer. First, we need to load a MIDI file (note that this is the MeltySynth <code>MidiFile</code> class, not the one in NAudio).</p>
<pre><code class="language-cs">var file = "Demo.mid";
//file = @"C:\Windows\Media\flourish.mid";
var midiFile = new MidiFile(file);
</code></pre>
<p>Then to play the MIDI file, you simply pass it through to the <code>Play</code> method on the <code>MidiSampleProvider</code>, with a flag indicating whether you want to loop. I made it so that pressing the spacebar triggered the MIDI file, and then waited for any key press to stop playback.</p>
<pre><code class="language-cs">// Wait until any key is pressed.
var key = Console.ReadKey(false);
if (key.Key == ConsoleKey.Spacebar)
{
Console.WriteLine("Playing MIDI file...");
// Play the MIDI file.
player.Play(midiFile, true);
Console.ReadKey(false);
}
</code></pre>
<h3 id="cleaning-up">Cleaning up</h3>
<p>Finally, as well as disposing <code>waveOut</code> which we already put in a <code>using</code> block, we must <code>Stop</code> and <code>Dispose</code> each <code>MidiIn</code> device we opened.</p>
<pre><code class="language-cs">foreach (var midiIn in devices)
{
midiIn.Stop();
midiIn.Dispose();
}
</code></pre>
<h3 id="summary">Summary</h3>
<p>One of the coolest things about releasing your work as open source is the ways the community can build on it, extend it and integrate with it. Huge thanks to Nobuaki Tanaka (<a href="https://github.com/sinshu">sinshu</a>) for all the hard work that went into MeltySynth and how easy it was to integrate with NAudio.</p>
https://markheath.net/post/clean-up-azure-dns-zoneCleaning up dangling DNS entries for an Azure DNS Zone2023-08-22T00:00:00Z2023-08-22T00:00:00ZMark Heathtest@example.com<p><a href="https://learn.microsoft.com/en-us/azure/dns/dns-zones-records">Azure DNS Zones</a> are a convenient way to manage your DNS records. You can configure up to 10,000 "record sets" which should be more than enough, although if you are in a situation where you automate the creation and teardown of many test deployments, it can be easy to forget to delete the DNS entries for resources that no longer exist, leaving you with a <a href="https://learn.microsoft.com/en-us/azure/security/fundamentals/subdomain-takeover">"dangling DNS"</a> situation that can be a security risk.</p>
<p>A simple way to fix this is to scan your DNS Zone for all CNAME entries that point to domains that no longer exist. Here's a basic PowerShell script that lists the CNAME records using the Azure CLI, tests each target for existence (using <a href="https://learn.microsoft.com/en-us/powershell/module/dnsclient/resolve-dnsname?view=windowsserver2022-ps"><code>Resolve-DnsName</code></a>), and then deletes the ones pointing to invalid domains.</p>
<pre><code class="language-powershell"># replace these example values with your own:
$zoneName = "mydomain.com"
$resourceGroupName = "mydnszonegrp"
$subscription = "3ba84cbe-9fc1-47e8-86bc-c42c584f30d1"
$records = az network dns record-set cname list --zone-name $zoneName `
    --resource-group $resourceGroupName `
    --subscription $subscription `
    --query "[].{name:name, cname:CNAMERecord.cname}" | ConvertFrom-Json
$valid = 0
# Loop through each record and use Resolve-DnsName to check if the target domain exists
foreach ($record in $records) {
    try {
        # Try to resolve the CNAME record and ignore the output
        Resolve-DnsName -Name $record.cname -ErrorAction Stop | Out-Null
        $valid++
    }
    catch {
        Write-Output "The target domain for $($record.name): $($record.cname) does not exist"
        # Delete the record
        az network dns record-set cname delete --zone-name $zoneName `
            --resource-group $resourceGroupName `
            --subscription $subscription `
            --name $record.name --yes
    }
}
Write-Output "Records still valid: $valid"
</code></pre>
<p>Of course, I recommend doing a dry run before actually deleting the records. This script only handles CNAME records. You might also want to clean up A records, but it would be a bit harder to automatically determine whether those are still valid.</p>
https://markheath.net/post/etags-patching-azure-table-storageUsing ETags and Patching Rows in Azure Table Storage2023-08-17T00:00:00Z2023-08-17T00:00:00ZMark Heathtest@example.com<p><a href="https://learn.microsoft.com/en-us/azure/storage/tables/table-storage-overview">Azure Table Storage</a> has the advantage of being very cheap, which means it often gets used when you have quite basic storage requirements, and don't want to shell out for a SQL or Cosmos database.</p>
<p>It works best when most of the time you are directly looking things up by their "key" (which in table storage is a combination of a <a href="https://learn.microsoft.com/en-us/rest/api/storageservices/understanding-the-table-service-data-model#rowkey-property">"partition key" and a "row key"</a>).</p>
<h3 id="using-etags-to-for-concurrency">Using ETags to for concurrency</h3>
<p>When you update a row (known as an "entity") in table storage, you can make use of an <a href="https://learn.microsoft.com/en-us/dotnet/api/azure.etag?view=azure-dotnet">ETag</a> to ensure that your change hasn't conflicted with someone else's change. First, you read the existing entity, which will give you the current ETag. Then you update the properties you wish to modify, and call <a href="https://learn.microsoft.com/en-us/rest/api/storageservices/update-entity2">Update</a> on that entity, passing in the original ETag (which is transmitted in the <code>If-Match</code> header). If the ETag hasn't changed since you read the entity, your update will be accepted; otherwise you'll get an HTTP 412 precondition failed response.</p>
<p>In this example I've declared a helper base class called <code>TableEntity</code> and my table is going to store <code>RoomEntity</code> objects:</p>
<pre><code class="language-cs">// n.b. using the Azure.Data.Tables NuGet package
public class TableEntity : ITableEntity
{
    public string PartitionKey { get; set; }
    public string RowKey { get; set; }
    public DateTimeOffset? Timestamp { get; set; }
    public ETag ETag { get; set; }
}

public class RoomEntity : TableEntity
{
    public string Name { get; set; }
    public string Location { get; set; }
    public int CurrentTemperature { get; set; }
    public bool WindowsOpen { get; set; }
}
</code></pre>
<p>And let's connect to table storage and add a new row...</p>
<pre><code class="language-cs">var connectionString = "my storage account connection string";
var tableServiceClient = new TableServiceClient(connectionString);
var patchTestTable = tableServiceClient.GetTableClient("patchtest");
await patchTestTable.CreateIfNotExistsAsync();
var partitionKey = "EXAMPLE";
var rowKey = "1010";
// clean up from any previous test runs:
await patchTestTable.DeleteEntityAsync(partitionKey, rowKey);
var entity = new RoomEntity()
{
    PartitionKey = partitionKey,
    RowKey = rowKey,
    Name = "Cedar",
    Location = "London",
    CurrentTemperature = 19,
    WindowsOpen = false
};
var added = await patchTestTable.AddEntityAsync(entity);
</code></pre>
<p>And if we want to see the original ETag (which typically is a string like <code>W/"datetime'2023-08-19T10%3A27%3A08.3147297Z'"</code>) we can get it like this:</p>
<pre><code class="language-cs">var initialETag = added.Headers.ETag.Value;
</code></pre>
<p>We can update it by modifying a property and calling <code>UpdateEntityAsync</code>. And I'll pass in the initial ETag to say that we should only update if no one else has changed the row since we initially created it.</p>
<pre><code class="language-cs">entity.Name = "Oak";
await patchTestTable.UpdateEntityAsync(entity, initialETag);
</code></pre>
<p>If, however, I do the same thing again and pass in the <em>initial</em> ETag, the update will fail with a <code>RequestFailedException</code> with the message <em>"The update condition specified in the request was not satisfied"</em>. That's because when we updated the Name column, the entity was given a new ETag.</p>
<pre><code class="language-cs">entity.Name = "Birch";
await patchTestTable.UpdateEntityAsync(entity, initialETag);
</code></pre>
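<p>If you want to handle that conflict rather than let the exception propagate, a sketch like the following works: catch the failure, re-read the entity to get the latest ETag, then merge your changes and retry as appropriate.</p>
<pre><code class="language-cs">try
{
    await patchTestTable.UpdateEntityAsync(entity, initialETag);
}
catch (RequestFailedException ex) when (ex.Status == 412)
{
    // someone else changed the row; re-read it to get the current ETag
    var latest = await patchTestTable.GetEntityAsync<RoomEntity>(partitionKey, rowKey);
    // ...re-apply our changes to latest.Value and retry with latest.Value.ETag
}
</code></pre>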
<p>Of course sometimes you don't care about conflicts, and are happy for the most recent change to win. In this case you can use <code>ETag.All</code> (which is just the string <code>"*"</code>), to indicate that you want your update to overwrite whatever was already in that row.</p>
<p>So if we change our code to use <code>ETag.All</code>, the update will go through without error.</p>
<pre><code class="language-cs">entity.Name = "Birch";
await patchTestTable.UpdateEntityAsync(entity, ETag.All);
</code></pre>
<p>However, this update runs the risk of potentially reverting someone else's change to another column (e.g. CurrentTemperature). Let's see how we can update only an individual column.</p>
<h3 id="updating-columns-individually-patching">Updating columns individually (patching)</h3>
<p>Imagine that our entities are being updated fairly regularly. We have a temperature sensor that is updating the <code>CurrentTemperature</code> column. And there is another sensor that periodically updates the <code>WindowsOpen</code> column.</p>
<p>The table storage SDK allows us to independently update the values in these columns, and in code the easiest way to do this is to define additional entities just for the columns you want to update:</p>
<pre><code class="language-cs">public class TemperatureEntity : TableEntity
{
    public int CurrentTemperature { get; set; }
}

public class WindowsOpenEntity : TableEntity
{
    public bool WindowsOpen { get; set; }
}
</code></pre>
<p>This allows us to use the <code>TableUpdateMode.Merge</code> flag on <code>UpdateEntityAsync</code> to only update the properties in the object we pass in. So in this example we're only setting the <code>WindowsOpen</code> flag to true.</p>
<pre><code class="language-cs">var windowsOpenUpdate = new WindowsOpenEntity()
{
    PartitionKey = partitionKey,
    RowKey = rowKey,
    WindowsOpen = true
};
// or ETag.All
await patchTestTable.UpdateEntityAsync(windowsOpenUpdate, initialETag, TableUpdateMode.Merge);
</code></pre>
<p>The nice thing about this setup is that we can use <code>ETag.All</code> with these patches, which saves us from having to look up the current ETag value or handle conflict responses, while still knowing that we're not going to overwrite changes to other columns in the entity.</p>
<pre><code class="language-cs">var temperatureUpdate = new TemperatureEntity()
{
    PartitionKey = partitionKey,
    RowKey = rowKey,
    CurrentTemperature = 23
};
await patchTestTable.UpdateEntityAsync(temperatureUpdate, ETag.All, TableUpdateMode.Merge);
</code></pre>
https://markheath.net/post/stream-video-blobs-aspnetcoreStream videos from Azure Blob storage with ASP.NET Core2023-07-27T00:00:00Z2023-07-27T00:00:00ZMark Heathtest@example.com<p>Many of the projects I work on deal with large video files stored in Azure Blob Storage. And we typically wish to view those videos in a web browser.</p>
<p>Of course, it is possible to expose those files by setting the <a href="https://learn.microsoft.com/en-us/azure/storage/blobs/anonymous-read-access-configure?tabs=portal#about-anonymous-public-read-access">blob container's access level to public</a>, or by <a href="https://learn.microsoft.com/en-us/azure/storage/common/storage-sas-overview">generating a SAS Uri</a> for a specific blob that the browser can directly use.</p>
<p>However, you may want additional security and auditing around access to the files, and so you may prefer to create your own API that provides streaming access to the blob.</p>
<p>For very large files you'll want to support <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Range">"content ranges"</a> which will allow the user to jump around within the file during playback.</p>
<p>In this post, I'll show you how I created a very simple demo ASP.NET Core app to set this up, allowing me to play all the MP4 files within a specific Azure Blob Storage container using the <a href="https://videojs.com/">Video.js player</a>.</p>
<h3 id="registering-a-blobserviceclient">Registering a BlobServiceClient</h3>
<p>Our app will need to use the Azure Blob Storage SDK to access blob storage, so add a reference to the <a href="https://www.nuget.org/packages/Azure.Storage.Blobs">Azure.Storage.Blobs</a> and <a href="https://www.nuget.org/packages/Azure.Identity/">Azure.Identity</a> NuGet packages.</p>
<p>Then, in the startup, we'll register a <code>BlobServiceClient</code> as a singleton. I'm assuming that in your config, you have a setting for the storage account Uri (which will be something like "https://mystorageaccount.blob.core.windows.net/"), and I've also made "VisualStudioTenantId" configurable so I could run locally more easily (as I use different Azure tenants), but you may not need this.</p>
<pre><code class="language-cs">builder.Services.AddSingleton<BlobServiceClient>((serviceProvider) => {
var config = serviceProvider.GetRequiredService<IConfiguration>();
var storageAccountUri = config["StorageAccount:Uri"];
var accountUri = new Uri(storageAccountUri);
var azureCredentialOptions = new DefaultAzureCredentialOptions();
azureCredentialOptions.VisualStudioTenantId = config["VisualStudioTenantId"];
var credential = new DefaultAzureCredential(azureCredentialOptions);
var blobServiceClient = new BlobServiceClient(accountUri, credential);
return blobServiceClient;
});
</code></pre>
<h3 id="the-streaming-endpoint">The streaming endpoint</h3>
<p>Now we need the endpoint that can stream a blob. I've mapped this to <code>/stream</code> with a <code>video</code> query string parameter that has the full path of the desired video within the container (in a production scenario you'd probably use some kind of video id instead).</p>
<p>All we need to do is get a container client (I've made the container name configurable), and create a <code>BlobClient</code> for that blob. Then we call <code>OpenReadAsync</code> to get a .NET <code>Stream</code> of the blob contents. Note that you should not wrap that stream in a <code>using</code> statement, as it needs to remain open after your method returns.</p>
<p>The magic happens in the <a href="https://learn.microsoft.com/en-us/dotnet/api/microsoft.aspnetcore.http.results.stream?view=aspnetcore-7.0"><code>Results.Stream</code></a> method which we configure with <code>enableRangeProcessing</code> set to <code>true</code> to allow seeking within the stream.</p>
<pre><code class="language-cs">app.MapGet("/stream", async (BlobServiceClient blobServiceClient,
HttpRequest req, IConfiguration configuration) => {
var video = req.Query["video"];
var containerName = configuration["StorageAccount:ContainerName"];
var container = blobServiceClient.GetBlobContainerClient(containerName);
var blob = container.GetBlobClient(HttpUtility.UrlDecode(video));
// https://learn.microsoft.com/en-us/dotnet/api/microsoft.aspnetcore.http.results.stream?view=aspnetcore-7.0
var stream = await blob.OpenReadAsync(); // don't use a "using" statement here as the stream must live on
return Results.Stream(stream, "video/mp4", enableRangeProcessing: true);
});
</code></pre>
<h3 id="listing-all-files">Listing all files</h3>
<p>I wanted a web page to list all the MP4 files in the container, so made a simple Razor page that lists them out, with a link to a "watch" page for each video:</p>
<pre><code class="language-html"><ul>
    @foreach (var video in Model.Videos)
    {
        <li><a asp-page="/watch" asp-route-video="@video">@video</a></li>
    }
</ul>
</code></pre>
<p>The page model class simply uses a <code>BlobServiceClient</code> to find all the MP4 files in the container:</p>
<pre><code class="language-cs">public async Task OnGetAsync()
{
    var containerName = configuration["StorageAccount:ContainerName"];
    var container = blobServiceClient.GetBlobContainerClient(containerName);
    var blobs = container.GetBlobsAsync();
    await foreach (var blob in blobs)
    {
        if (blob.Name.EndsWith(".mp4"))
        {
            Videos.Add(blob.Name);
        }
    }
}
</code></pre>
<h3 id="playback-with-video.js">Playback with Video.js</h3>
<p>I chose <a href="https://videojs.com/">Video.js</a> for playback, so in my <code>_Layout.cshtml</code> I reference the video.js CSS and Javascript files from their CDN:</p>
<pre><code class="language-html"><!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>VideoJS test</title>
<link href="https://vjs.zencdn.net/8.3.0/video-js.css" rel="stylesheet" />
</head>
<body>
<h3>VideoJS Test</h3>
@RenderBody()
<script src="https://vjs.zencdn.net/8.3.0/video.min.js"></script>
</body>
</html>
</code></pre>
<p>And then finally on my <code>/watch</code> razor page, we fetch the video path from the <code>video</code> query string parameter, and then pass that on to the <code>/stream</code> endpoint as the <code>src</code> property for the video <code><source></code> element.</p>
<pre><code class="language-html">@page
@using VideoJS.Pages
@{var video = HttpContext.Request.Query["video"];}
<video
    id="my-video"
    class="video-js"
    controls
    preload="auto"
    width="960"
    poster="//vjs.zencdn.net/v/oceans.png"
    data-setup="{}"
>
    <source src="/stream?video=@video" type="video/mp4" />
    <p class="vjs-no-js">
        To view this video please enable JavaScript, and consider upgrading to a
        web browser that
        <a href="https://videojs.com/html5-video-support/" target="_blank">supports HTML5 video</a>
    </p>
</video>
</code></pre>
<h3 id="deploying-to-azure">Deploying to Azure</h3>
<p>To deploy my app, I used the <a href="https://learn.microsoft.com/en-us/cli/azure/">Azure CLI</a> to create an app service plan and web app:</p>
<pre><code class="language-powershell">$APP_SERVICE_PLAN = "myappserviceplan"
$SUBSCRIPTION = "bf038c01-7cdb-4682-80aa-2ddf24aae438"
$LOCATION = "North Europe"
$RESOURCE_GROUP = "myvideotest"
$APP_NAME = "videostreamtest"
az account set --subscription $SUBSCRIPTION
az group create --name $RESOURCE_GROUP --location $LOCATION
az appservice plan create --name $APP_SERVICE_PLAN `
    --resource-group $RESOURCE_GROUP --location $LOCATION --sku B1
az webapp create --name $APP_NAME --resource-group $RESOURCE_GROUP `
    --plan $APP_SERVICE_PLAN
dotnet publish -c Release
$publishFolder = "bin\Release\net7.0\publish"
# package web app to zip
$zipFilename = "$APP_NAME.zip"
Compress-Archive -Path "$publishFolder\*" -DestinationPath $zipFilename -Force
# publish zip to web app
az webapp deployment source config-zip --resource-group $RESOURCE_GROUP `
    --name $APP_NAME --src $zipFilename
</code></pre>
<p>I wanted to use managed identities for securing access to my storage account, and so we need to ensure that the web app has a managed identity, and that it has been granted a role like "Storage Blob Data Contributor" for the target storage account:</p>
<pre><code class="language-powershell"># configure web app with a managed identity
az webapp identity assign --name $APP_NAME --resource-group $RESOURCE_GROUP
# get the managed identity id
$identityId = az webapp identity show --name $APP_NAME `
    --resource-group $RESOURCE_GROUP --query principalId --output tsv
# create a role assignment for the managed identity
$STORAGE_ACCOUNT_SUBSCRIPTION = "adc5b04e-c6f2-4c89-9c6c-730bfcb1e9d7"
$STORAGE_ACCOUNT_RESOURCE_GROUP = "storage-res-group"
$STORAGE_ACCOUNT_NAME = "mystorageaccount"
az role assignment create --assignee $identityId `
    --role "Storage Blob Data Contributor" `
    --scope "/subscriptions/$STORAGE_ACCOUNT_SUBSCRIPTION/resourceGroups/$STORAGE_ACCOUNT_RESOURCE_GROUP/providers/Microsoft.Storage/storageAccounts/$STORAGE_ACCOUNT_NAME"
</code></pre>
<p>With that in place, you should be able to view the videos (assuming you've correctly set up your configuration to point at the storage account).</p>
<h3 id="summary">Summary</h3>
<p>The ASP.NET Core <code>Results.Stream</code> method makes it trivially easy to create a streaming endpoint that makes a video from blob storage available to a browser, and supports seeking within the stream.</p>
<p>Obviously you need to take performance into account. I had no problems playing high bitrate videos using this technique, but you are proxying a lot of data through your webapp, so at high volumes of users, you'll probably need to scale out to multiple web servers.</p>
<p>And this demo doesn't show the setup of authorization which is one of the motivators for doing this in the first place. For my demo app I actually configured <a href="https://learn.microsoft.com/en-us/azure/app-service/overview-authentication-authorization">"easy auth"</a> on my web app to only allow access to users from a particular Azure Active Directory tenant. But of course you can use anything that works with ASP.NET Core to authorize access to the videos, as well as auditing that access if required.</p>
https://markheath.net/post/variables-steps-azure-pipelinesPass variables between steps in Azure Pipelines2023-06-30T00:00:00Z2023-06-30T00:00:00ZMark Heathtest@example.com<p>I have recently been working with <a href="https://learn.microsoft.com/en-us/azure/devops/pipelines/get-started/what-is-azure-pipelines?view=azure-devops">Azure Pipelines</a> and found myself needing to pass a variable between "steps".</p>
<p>In theory, this should have been simple, but unfortunately, a lot of the documentation I followed assumed that you were <a href="https://medium.com/@badawekoo/passing-variables-between-jobs-in-azuredevops-pipelines-78fbccc41bc8">passing a variable between "jobs"</a>, which requires setting <code>isOutput=true</code>, and this didn't work for what I was trying to do.</p>
<p>If you're new to Azure Pipelines, this diagram shows the relationship between "stages", "jobs" and "steps":</p>
<p><img src="https://markheath.net/posts/2023/pipelines-stages-jobs-steps.svg" alt="Stages Jobs Steps" /></p>
<p>After a bit of trial and error I got something that works. In this example, my pipeline has two "steps". The first is a PowerShell task that uses <code>task.setvariable</code> to set a variable called <code>message</code> to a string I'd previously set up.</p>
<pre><code class="language-yaml">steps:
- task: PowerShell@2
inputs:
targetType: 'inline'
script: |
$myMessage="Hello World"
echo "##vso[task.setvariable variable=message]$myMessage";
displayName: 'Set variable'
</code></pre>
<p>And then in the second step, we can use the <code>$(message)</code> syntax, or access it as an environment variable with <code>$env:message</code>. <em>(By the way, I also tried passing a variable from one step to the next by setting an environment variable directly, but that didn't seem to work).</em></p>
<pre><code class="language-yaml">- task: PowerShell@2
inputs:
targetType: 'inline'
script: |
echo "Message variable is $(message)."
echo "Message env variable is $env:message."
displayName: 'Read variable'
</code></pre>
<p>Hope that's helpful to someone.</p>
https://markheath.net/post/use-secrets-in-unit-testsUse secrets in unit tests2023-06-06T00:00:00Z2023-06-06T00:00:00ZMark Heathtest@example.com<p>Although the majority of "unit tests" should adhere to the <a href="https://learn.microsoft.com/en-us/dotnet/core/testing/unit-testing-best-practices#characteristics-of-a-good-unit-test">"FIRST" characteristics</a> and be "isolated" by mocking out their dependencies, there are occasions where it can be useful to actually connect to an external resource, allowing you to use a unit test runner to exercise your code against a "real" dependency.</p>
<p>I recently wanted to do this for some code that accessed Azure Table Storage. The existing code was performing poorly and used an old SDK, so I wanted to fully cover it with "integration tests" before upgrading to the new SDK and performance tuning it.</p>
<p>To do so, I needed access to a connection string, but didn't want to hard-code it as we all know that secrets don't belong in source control. Of course an obvious solution would be to access the connection string as an environment variable, but the <a href="https://learn.microsoft.com/en-us/visualstudio/test/run-unit-tests-with-test-explorer?view=vs-2022">Visual Studio Test Explorer</a> (as far as I can tell at least) offers no easy way to set environment variables for a test run.</p>
<p>Note: there is a <a href="https://learn.microsoft.com/en-us/visualstudio/test/configure-unit-tests-by-using-a-dot-runsettings-file?view=vs-2022"><code>.runsettings</code> file</a> but that suffers from exactly the same problem - you'd end up checking secrets into source control.</p>
<p>Fortunately, the <a href="https://learn.microsoft.com/en-us/aspnet/core/security/app-secrets?view=aspnetcore-7.0&tabs=windows">.NET user secrets</a> feature can help us out here. And although many articles about it assume you're using ASP.NET Core, you can use .NET user secrets in any project type, including an NUnit test assembly.</p>
<p>The steps are simple. First, in a command prompt in the directory of your unit test project, enter the <code>dotnet user-secrets init</code> command. This adds a <code>UserSecretsId</code> GUID into your <code>.csproj</code> file.</p>
<p>Then you can store your secret with the <code>dotnet user-secrets set</code> command. For example:</p>
<pre><code class="language-bat">dotnet user-secrets set MyConnectionString "Your connection string goes here"
</code></pre>
<p>Using the secret value within your unit test is very simple. Just use a <code>ConfigurationBuilder</code> and call <code>AddUserSecrets</code> using any class from your assembly as the generic argument. Then you can access secrets by name on the configuration instance, and you'll get a null value if the secret is not present.</p>
<pre><code class="language-cs">using Microsoft.Extensions.Configuration;
// ...
var config = new ConfigurationBuilder()
    .AddUserSecrets<MyUnitTests>()
    .Build();
var connectionString = config["MyConnectionString"];
</code></pre>
<p>Note that you will need the <a href="https://www.nuget.org/packages/Microsoft.Extensions.Configuration.UserSecrets"><code>Microsoft.Extensions.Configuration.UserSecrets</code> NuGet package</a> for this to work, which you can install with:</p>
<pre><code class="language-bat">dotnet add package Microsoft.Extensions.Configuration.UserSecrets
</code></pre>
<p>Anyway, although I'm sure many of my readers already know about user secrets, I wanted to document how to use them, as I still very often see secrets hard-coded in unit tests.</p>
<p><strong>Update:</strong> Thanks to a comment from Vladislav Antonyuk, I realised after posting this that I hadn't said anything about how you'd access secrets if you were running the tests in a CI/CD environment (as I was mainly focusing on the challenge of running them in the VS Test Explorer without committing secrets to source control). In a CI/CD situation I'd simply use environment variables. Just add an <code>.AddEnvironmentVariables()</code> call to the <code>ConfigurationBuilder</code>.</p>