ByteHide

Originally published at bytehide.com

The Latest .NET 7 Features Will Change the Way You Code

.NET 7 brings some amazing new features that will change the way you code. They haven't shipped in a stable release yet, but you can start playing with them now and get ready for the official launch in November of this year!

The newest preview of .NET 7 has been released to the public and it's full of surprises. Check out these top 7 features in the .NET 7 release that will change the way you code forever!


Activity.Current New Standard

Currently, in .NET 6, the most common way to track span context across the threads being managed is to use AsyncLocal<T>.
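
For context, the .NET 6 pattern looks roughly like this minimal sketch (the SpanContext helper and its string-based span value are hypothetical, just to illustrate the AsyncLocal<T> value-changed callback):

using System;
using System.Threading;

// Hypothetical helper: tracking the "current" span with AsyncLocal<T>, using its
// value-changed callback to observe context switches across async flows.
public static class SpanContext
{
    private static readonly AsyncLocal<string?> _currentSpan = new AsyncLocal<string?>(
        args => Console.WriteLine(
            $"Span changed from '{args.PreviousValue}' to '{args.CurrentValue}'"));

    public static string? Current
    {
        get => _currentSpan.Value;
        set => _currentSpan.Value = value;
    }
}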

According to Jeremy Likness in his Announcing .NET 7 Preview 4 post: 

"…with Activity becoming the standard to represent spans, as used by OpenTelemetry, it is impossible to set the value changed handler since the context is tracked via Activity.Current."

Now, with Activity.CurrentChanged, we can subscribe and receive notifications whenever the current activity changes. Let's see Microsoft's example:

public partial class Activity : IDisposable
{
    public static event EventHandler<ActivityChangedEventArgs>? CurrentChanged;
}

And this is how it would be used:

Activity.CurrentChanged += CurrentChanged;

void CurrentChanged(object? sender, ActivityChangedEventArgs e)
{
    Console.WriteLine($"Activity.Current value changed from Activity: {e.Previous?.OperationName} to Activity: {e.Current?.OperationName}");
}
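
To actually see the event fire, you can start and stop activities through an ActivitySource. The following is my own illustrative sketch (the listener setup and the "Demo"/"ProcessOrder" names are not from the post), assuming the handler above is already subscribed:

using System.Diagnostics;

// Illustrative setup: a listener is required so that StartActivity returns a real activity.
var listener = new ActivityListener
{
    ShouldListenTo = _ => true,
    Sample = (ref ActivityCreationOptions<ActivityContext> options) => ActivitySamplingResult.AllData
};
ActivitySource.AddActivityListener(listener);

var source = new ActivitySource("Demo");
using (Activity? activity = source.StartActivity("ProcessOrder"))
{
    // Activity.Current now points at "ProcessOrder", so CurrentChanged has fired once.
}
// Disposing the activity restores the previous Activity.Current and fires CurrentChanged again.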

📚 For more in-depth details, I recommend checking the original proposal: Api handle Activity.Current value changes


Exposed Methods in performance-critical scenarios

The main problem this new feature solves, as Mikel Blanchard points out, is that performance tests show many allocations being incurred when enumerating through the enumeration interfaces.

This is now addressed by exposing methods that enumerate the properties directly, giving quick access to the elements without extra allocations.

Let's see Microsoft's example:

namespace System.Diagnostics
{
    partial class Activity
    {
        public Enumerator<KeyValuePair<string,object>> EnumerateTagObjects();
        public Enumerator<ActivityLink> EnumerateLinks();
        public Enumerator<ActivityEvent> EnumerateEvents();

        public struct Enumerator<T>
        {
            public readonly Enumerator<T> GetEnumerator();
            public readonly ref T Current;
            public bool MoveNext();
        }
    }
}

And this is how it would be used:

Activity a = new Activity("Root");

a.SetTag("key1", "value1");
a.SetTag("key2", "value2");

foreach (ref readonly KeyValuePair<string, object?> tag in a.EnumerateTagObjects())
{
    Console.WriteLine($"{tag.Key}, {tag.Value}");
}
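
For comparison, here is my own sketch of how the same tags are enumerated today through the IEnumerable-based TagObjects property, which is where the enumerator allocations that the new methods avoid come from:

using System;
using System.Collections.Generic;
using System.Diagnostics;

Activity a = new Activity("Root");

a.SetTag("key1", "value1");
a.SetTag("key2", "value2");

// TagObjects is an IEnumerable<KeyValuePair<string, object?>>, so each foreach
// goes through an interface enumerator instead of the new struct Enumerator<T>.
foreach (KeyValuePair<string, object?> tag in a.TagObjects)
{
    Console.WriteLine($"{tag.Key}, {tag.Value}");
}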

📚 For more in-depth details, I recommend checking the original proposal: System.Diagnostics.Activity: Enumeration API


Microseconds and Nanoseconds in date/time structures

Until now, the smallest time increment you could work with was the "tick", which is worth 100 ns. The problem is that to get a value in microseconds or nanoseconds, you had to calculate everything from ticks, which was not the most optimal thing in the world.

As Microsoft reports, they are now adding microsecond and nanosecond values to the different date and time structures.

Let's see Microsoft's example:

With DateTime:

namespace System {
    public struct DateTime {
        public DateTime(int year, int month, int day, int hour, int minute, int second, int millisecond, int microsecond);
        public DateTime(int year, int month, int day, int hour, int minute, int second, int millisecond, int microsecond, System.DateTimeKind kind);
        public DateTime(int year, int month, int day, int hour, int minute, int second, int millisecond, int microsecond, System.Globalization.Calendar calendar);
        public int Microsecond { get; }
        public int Nanosecond { get; }
        public DateTime AddMicroseconds(double value);
    }
}

With TimeOnly:

namespace System {
    public struct TimeOnly {
        public TimeOnly(int hour, int minute, int second, int millisecond, int microsecond);
        public int Microsecond { get; }
        public int Nanosecond { get; }
    }
}
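
Based on the API shape above, usage would look something like this small sketch of mine (the concrete date values are just an illustration):

using System;

var dt = new DateTime(2022, 11, 8, 10, 30, 45, millisecond: 500, microsecond: 250);

Console.WriteLine(dt.Microsecond);                       // 250
Console.WriteLine(dt.Nanosecond);                        // 0 (no sub-microsecond ticks were set)
Console.WriteLine(dt.AddMicroseconds(1.5).Microsecond);  // 251

// Previously the microsecond component had to be derived from ticks, e.g.:
Console.WriteLine(dt.Ticks % TimeSpan.TicksPerMillisecond / 10);  // 250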

📚 For more in-depth details, I recommend checking the original proposal: Add Microseconds and Nanoseconds to TimeStamp, DateTime, DateTimeOffset, and TimeOnly


One Memory Cache

Now you can instantiate a single memory cache with the AddMemoryCache API. In addition, you can have it injected so you can call GetCurrentStatistics on it. Let's check Microsoft's example:

// when using `services.AddMemoryCache(options => options.TrackStatistics = true);` to instantiate

[EventSource(Name = "Microsoft-Extensions-Caching-Memory")]
internal sealed class CachingEventSource : EventSource
{
    // fields implied by the sample
    private readonly IMemoryCache _memoryCache;
    private PollingCounter? _cacheHitsCounter;

    public CachingEventSource(IMemoryCache memoryCache) { _memoryCache = memoryCache; }

    protected override void OnEventCommand(EventCommandEventArgs command)
    {
        if (command.Command == EventCommand.Enable)
        {
            if (_cacheHitsCounter == null)
            {
                _cacheHitsCounter = new PollingCounter("cache-hits", this, () =>
                    _memoryCache.GetCurrentStatistics().CacheHits)
                {
                    DisplayName = "Cache hits",
                };
            }
        }
    }
}
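
If you just want to read the statistics rather than publish them through an EventSource counter, the wiring looks roughly like this. A minimal sketch of mine, assuming the Microsoft.Extensions.Caching.Memory and Microsoft.Extensions.DependencyInjection packages:

using System;
using Microsoft.Extensions.Caching.Memory;
using Microsoft.Extensions.DependencyInjection;

// Register a single memory cache with statistics tracking enabled.
var services = new ServiceCollection();
services.AddMemoryCache(options => options.TrackStatistics = true);

using var provider = services.BuildServiceProvider();
var cache = provider.GetRequiredService<IMemoryCache>();

cache.Set("key", "value");
cache.TryGetValue("key", out object? _);   // counts as a cache hit

// GetCurrentStatistics is exposed by the MemoryCache implementation.
MemoryCacheStatistics? stats = (cache as MemoryCache)?.GetCurrentStatistics();
Console.WriteLine($"Hits: {stats?.TotalHits}, Misses: {stats?.TotalMisses}");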

In addition, Microsoft gives us an example of what the stats look like in the dotnet-counters tool:

Press p to pause, r to resume, q to quit.
    Status: Running

[System.Runtime]
    CPU Usage (%)                                      0
    Working Set (MB)                                  28
[Microsoft-Extensions-Caching-MemoryCache]
    cache-hits                                       269

📚 For more in-depth details, I recommend checking the original proposal: Let consumers of MemoryCache access metrics


Multiple Memory Cache

As with the previous feature, which allowed instantiating a single memory cache, we can also instantiate multiple memory caches and query each of them with GetCurrentStatistics. Let's check this Microsoft example:

// usings and class wrapper added so the sample compiles as a console program
using System.Collections.Generic;
using System.Diagnostics.Metrics;
using Microsoft.Extensions.Caching.Memory;

internal class Program
{
    static Meter s_meter = new Meter("Microsoft.Extensions.Caching.Memory.MemoryCache", "1.0.0");
    static IMemoryCache? mc1;
    static IMemoryCache? mc2;

    static void Main(string[] args)
    {
        s_meter.CreateObservableGauge<long>("cache-hits", GetCacheHits);
        mc1 = new MemoryCache(new MemoryCacheOptions() { TrackStatistics = true, SizeLimit = 30 });
        mc2 = new MemoryCache(new MemoryCacheOptions() { TrackStatistics = true, SizeLimit = 30 });

        // call to: mc1.TryGetValue(key1, out object? value)
        // or: mc2.TryGetValue(key2, out value2)
        // increments TotalHits
    }

    // metrics callback for cache hits
    static IEnumerable<Measurement<long>> GetCacheHits()
    {
        return new Measurement<long>[]
        {
            new Measurement<long>(mc1!.GetCurrentStatistics()!.TotalHits, new KeyValuePair<string, object?>("CacheName", "mc1")),
            new Measurement<long>(mc2!.GetCurrentStatistics()!.TotalHits, new KeyValuePair<string, object?>("CacheName", "mc2")),
        };
    }
}

As in the previous feature, Microsoft shows us that we can also measure the stats with the dotnet-counters tool (https://docs.microsoft.com/en-us/dotnet/core/diagnostics/dotnet-counters):

Press p to pause, r to resume, q to quit.
    Status: Running

[System.Runtime]
    CPU Usage (%)                                      0
    Working Set (MB)                                  14
[Microsoft.Extensions.Caching.Memory.MemoryCache]
    cache-hits
        CacheName=mc1                             13,204
        CacheName=mc2                             13,204
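
The exact command isn't shown in the post, but an invocation along the lines of dotnet-counters monitor --process-id <pid> --counters System.Runtime,Microsoft.Extensions.Caching.Memory.MemoryCache is what surfaces the Meter created in the sample above.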

📚 For more in-depth details, I recommend checking the original proposal: Let consumers of MemoryCache access metrics


New Tar APIs

We will now have cross-platform APIs with which we can extract and modify (read and write) tar archives. As usual, Microsoft has provided examples, so let's take a look at some of them:

Archive

// Generates a tar archive where all the entry names are prefixed by the root directory 'SourceDirectory'
TarFile.CreateFromDirectory(sourceDirectoryName: "/home/dotnet/SourceDirectory/", destinationFileName: "/home/dotnet/destination.tar", includeBaseDirectory: true);

Extract

// Extracts the contents of a tar archive into the specified directory, but avoids overwriting anything found inside
TarFile.ExtractToDirectory(sourceFileName: "/home/dotnet/destination.tar", destinationDirectoryName: "/home/dotnet/DestinationDirectory/", overwriteFiles: false);
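
Beyond the TarFile convenience methods, the same System.Formats.Tar namespace also exposes entry-level reading. Here is a small sketch of my own, not from the post:

using System;
using System.Formats.Tar;
using System.IO;

// Read a tar archive entry by entry instead of extracting everything to disk.
using FileStream archiveStream = File.OpenRead("/home/dotnet/destination.tar");
using TarReader reader = new TarReader(archiveStream);

TarEntry? entry;
while ((entry = reader.GetNextEntry()) is not null)
{
    Console.WriteLine($"{entry.EntryType}: {entry.Name}");
}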

📚 For more in-depth details, I recommend checking the original proposal: Implement Tar APIs


OSR (On Stack Replacement)

OSR (On Stack Replacement) is a great complement to tiered compilation. It allows the runtime to replace the code of a method that is currently running with a more optimized version, right in the middle of its execution.

According to Microsoft:

"OSR allows long-running methods to switch to more optimized versions mid-execution, so the runtime can jit all methods quickly at first and then transition to more optimized versions when those methods are called frequently (via tiered compilation) or have long-running loops (via OSR)."

With OSR, startup can be up to 25% faster (Avalonia ILSpy test), and according to the TechEmpower benchmarks, improvements range from 10% to 30%.

Performance Impact (Source: Microsoft)

📚 If you want to know more about how OSR works, please refer to the OSR Document.
