James Turner

The newly announced future of .NET - unifying all the things

Microsoft has just announced what the future of .NET will be after .NET Core 3. While it won't be seen until late 2020, it does sound like a big step forward in the world of .NET.

You're not alone in wondering why the jump from 3 to 5, but their blog post has you covered:

We’re skipping the version 4 because it would confuse users that are familiar with the .NET Framework, which has been using the 4.x series for a long time. Additionally, we wanted to clearly communicate that .NET 5 is the future for the .NET platform.

What do they mean by unification? From the post:

  • Produce a single .NET runtime and framework that can be used everywhere and that has uniform runtime behaviors and developer experiences.
  • Expand the capabilities of .NET by taking the best of .NET Core, .NET Framework, Xamarin and Mono.
  • Build that product out of a single code-base that developers (Microsoft and the community) can work on and expand together and that improves all scenarios.

Additionally...

  • You will have more choice on runtime experiences (more on that below).
  • Java interoperability will be available on all platforms.
  • Objective-C and Swift interoperability will be supported on multiple operating systems.
  • CoreFX will be extended to support static compilation of .NET (ahead-of-time – AOT), smaller footprints and support for more operating systems.

My biggest question so far with the announcement is what will happen to .NET Standard, as .NET 5 seems to supersede it in terms of standardising interaction with .NET.

What are your thoughts about this announcement? Any concerns? Anything that you find especially exciting?

Make sure to read Microsoft's full post to know all the details.

Top comments (46)

James Turner • Edited

As an aside from the post, unifying/standardising .NET like this does make me think of a certain XKCD comic:

XKCD Standards Comic

Cameron Brown

I'd say the situation is fairly different with a monolithic entity in control of all versions here.

James Hickey

Sounds about right to me 👍

I think the statement being made implicitly by skipping v. 4 speaks clearly about the intentions with .NET Framework.

There have been many complaints about the mismatch between .NET Standard versions and .NET Core versions, so even if they decide to move .NET Standard to v. 5 (as silly as it seems...) I would be for it (but I doubt that will happen).

Either way, s'all good 👌

Bojana Dejanović

So you are saying .NET Framework will stop at the current 4.8 version? That they will stop development, that is?

James Hickey

Not that it will stop, but it won't be getting new features like .NET Core. It will just get bug fixes and critical stuff. And MS doesn't recommend starting any new projects with it, etc.

devblogs.microsoft.com/dotnet/upda...

Bojana Dejanović

Yes, I figured they will support it with bug fixes, patches, etc.; after all, there is still a lot of code running on the old .NET Framework, so it makes sense. But in terms of new features they will not, and as per the article and comments above, .NET 5 should unify everything and be the go-to for whatever platform you are targeting.

Bojana Dejanović

From the pasted link:

If you have existing .NET Framework applications, you should not feel pressured to move to .NET Core. Both .NET Framework and .NET Core will move forward, and both will be fully supported, .NET Framework will always be a part of Windows. But moving forward they will contain somewhat different features. Even inside of Microsoft we have many large product lines that are based on .NET Framework and will remain on .NET Framework.

James Turner

That is my understanding of it, otherwise these version numbers are going to get all sorts of confusing!

I think that if they hit their goals for .NET 5, it might not actually be that difficult to migrate from .NET Framework to it due to the sheer number of APIs available.

There might be some limitations around the simplicity of porting something like WinForms (need to do more reading up on that) and I saw a tweet talking about WebForms not being supported.

Bojana Dejanović

"Not that difficult" is always tricky :)
Ok, I guess that makes sense in regard to the "traditional" .NET Framework. But a lot of enterprise businesses rely heavily on technologies like WCF, for which there is no suitable replacement in .NET Core (not all APIs can be rewritten using Web API). So it will be interesting to see how this will be handled. For now it's far safer to stick with the old and proven .NET Framework.

James Turner

Yeah, there is so much variability in enterprise stacks that it's difficult to know with any certainty whether something will or won't have issues, whether that be something like WCF or using COM components.

From the cover image I took from the blog, other technologies like WPF, Windows Forms and UWP are planned to work with .NET 5, but I have no idea about WCF.

What got me excited with .NET Standard was the ability to target both from a library point-of-view without really needing to do extra work. I could see that opening the door to get large parts of enterprise workloads ready for a future .NET Core/.NET 5 world but yeah, you're still stuck with having things like WCF in .NET Framework.
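
To make that dual-targeting idea concrete, here is a rough sketch of what a multi-targeted library method can look like (the conditional compilation symbols are the ones the SDK defines per target framework; the class and method names are made up for illustration):

// Sketch of a library method that multi-targets .NET Framework and .NET Core/.NET 5
// from one code base, using the compilation symbols the SDK defines per target.
// "PageFetcher"/"FetchAsync" are illustrative names, not anything from the post.
using System.Net.Http;
using System.Threading.Tasks;

public static class PageFetcher
{
    public static async Task<string> FetchAsync(string url)
    {
#if NETFRAMEWORK
        // Path compiled only into the net4x build of the library.
        using (var client = new HttpClient())
        {
            return await client.GetStringAsync(url);
        }
#else
        // Path compiled into the .NET Core / .NET 5 build, free to use newer syntax/APIs.
        using var client = new HttpClient();
        return await client.GetStringAsync(url);
#endif
    }
}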

Chad Smith

Microsoft has commented on that before, but I can't remember where or exactly what they said. They did mention that, as a company, they would be in the same boat with WCF, as they also have a lot of legacy applications using it.

Either way, as a full-time .NET developer, I am excited to see where this goes. I definitely see a lot of movement in the right direction.

Bojana Dejanović • Edited

While some of the low-level libraries needed by WCF have been ported to .NET Core and/or added to .NET Standard, no real progress has been made on WCF hosting itself. In March a separate thread titled Call for Decision: Server-Side WCF was created and tagged with the .NET 3.0 milestone.

More recently Immo Landwerth wrote, “We won’t bring [...] WCF hosting in the .NET Core 3.0 time frame but depending on customer feedback we’ll do it later.”

The source code for WCF is available on GitHub under the MIT open source license. So in theory a community sponsored version of WCF for .NET Core is feasible.

link

Multi

COM is supported in .NET Core, so it will probably stay like that.
I am wondering how they will ship Windows-specific features (WPF, WinForms, WCF), if at all. Will it be with desktop packs, or perhaps just included in the .NET 5 Windows installer?
Questions like that are hanging in the air...

I personally hope .NET 5 will encourage/introduce a powerful cross-platform XAML UI framework (preferably WPF, or a new one?) that will be able to compete with PWAs and Electron desktop "web" apps (which drain your memory).

Bojana Dejanović

I am sorry to quote so many articles, but this one from Scott Hunter is interesting:

After .NET Core 3.0 we will not port any more features from .NET Framework. If you are a Web Forms developer and want to build a new application on .NET Core, we would recommend Blazor which provides the closest programming model. If you are a remoting or WCF developer and want to build a new application on .NET Core, we would recommend either ASP.NET Core Web APIs or gRPC (Google RPC, which provides cross platform and cross programming language contract based RPCs). If you are a Windows Workflow developer there is an open source port of Workflow to .NET Core

link
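
For a sense of what that recommended Web API shape looks like, here is a minimal ASP.NET Core controller sketch (the controller, route and DTO names are invented purely for illustration):

// Minimal ASP.NET Core Web API controller, sketched as the kind of endpoint
// the quote points to instead of a new WCF service. All names are illustrative.
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("api/[controller]")]
public class OrdersController : ControllerBase
{
    // GET api/orders/42
    [HttpGet("{id}")]
    public ActionResult<OrderDto> Get(int id)
    {
        // A real service would call into its domain/data layer here.
        return Ok(new OrderDto { Id = id, Status = "Pending" });
    }
}

public class OrderDto
{
    public int Id { get; set; }
    public string Status { get; set; }
}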

James Hickey

Just read that one too:

.NET Framework 4.8 will be the last major version of .NET Framework.

Bojana Dejanović

Yes, everyone speculated about this for a long time, but this is an official confirmation that they are going to freeze development (besides bug fixes) on the .NET Framework. And with the announcements of .NET 5 and WSL 2, they are clearly stating their plans for the future :)

Rasmus Schultz

So .NET Core becomes .NET 5 and basically usurps the remaining missing features from .NET Framework and becomes the new standard run-time.

I think that's a magnificent move!

With plans to also expand platform reach to Android and WASM, I feel like C# and .NET will finally become a strong alternative to many of the mainstream scripting languages that we basically treat like compiled languages anyhow.

My only major remaining reservation about the .NET platform is the same as it's always been for C# and most of today's mainstream languages: dependency on C.

Sadly, none of these languages are fast or memory efficient enough to stand alone - practically every popular language and platform (perhaps except for Go and Rust) currently outsources anything performance-critical to C, with most of those libraries being inaccessible to (and unmaintainable by) users of those languages.

Will C# and .NET ever be fast enough to, say, implement a low-level binary database protocol, implement a neural network, or load/save/resize a JPEG image?

This is why I keep wishing and longing for something like Skew or Vlang to take off.

C# is a brilliant language - but I really hope for something that's bootstrapped and doesn't outsource anything to another language; including the language itself.

Any community around a language would be much stronger and more unified if users of the language could safely and comfortably contribute to the compiler itself, the tools, and every available library for it.

I think that's what made Go a success.

I'm much more comfortable with C# and would love to see it some day succeed in the same way.

Kasey Speakman • Edited

Will C# and .NET ever be fast enough to, say, implement a low-level binary database protocol, implement a neural network, or load/save/resize a JPEG image?

Sure it is*, but a bigger question is: does compiled C# application perf block your dev work?

* Examples written in C#

RavenDB

... RavenDB can perform over 150,000 writes per second and 1 million reads on simple commodity hardware.

Another press release I found says that this is specifically single node performance.

EventStore

Whilst performance depends on configuration and use patterns, we’ve benchmarked Event Store at around 15,000 writes per second and 50,000 reads per second!

Pretty sure this was benchmarked on a laptop at a demo I watched, not a server with high perf SSDs.

Techempower Benchmarks - Notice aspcore

ML.NET - machine learning

Also realize that the JIT is capable of squeezing out extra perf, because it will further optimize heavily used methods at runtime. Whereas AOT doesn't have as much information to know which pieces of code should get deep optimization. So, .NET code should get pretty close to C/C++ after warmup, and perhaps faster in long running services. .NET 5 will include AOT compilation option if you want to get faster startup instead of faster long-term perf -- such as in functions-as-a-service.
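
A quick way to see the warmup effect in plain C# (just Stopwatch timing, nothing .NET 5 specific; treat it as an illustration rather than a proper benchmark):

// Rough illustration of JIT warmup: the first call pays the cost of compiling
// the method to machine code, later calls reuse the compiled (and, with tiered
// compilation, re-optimised) version.
using System;
using System.Diagnostics;

class JitWarmupDemo
{
    static long SumOfSquares(int n)
    {
        long total = 0;
        for (int i = 0; i < n; i++) total += (long)i * i;
        return total;
    }

    static void Main()
    {
        var sw = Stopwatch.StartNew();
        SumOfSquares(1_000_000);
        sw.Stop();
        Console.WriteLine($"First call (includes JIT): {sw.Elapsed.TotalMilliseconds:F3} ms");

        sw.Restart();
        SumOfSquares(1_000_000);
        sw.Stop();
        Console.WriteLine($"Second call (already compiled): {sw.Elapsed.TotalMilliseconds:F3} ms");
    }
}

For anything more serious than a one-off illustration, BenchmarkDotNet (which shows up later in this thread) handles warmup and iteration counts properly.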

In any case, use F# instead of C#. 😀 (if you want)

James Turner

practically every popular language and platform (perhaps except for Go and Rust) currently outsource anything performance-critical to C

I might not be 100% aware of everything behind the scenes in C# but I don't believe that is actually the case for C#/.NET (at least it hasn't been for a while).

The Roslyn compiler for C# is self-hosted, meaning that it is actually written in C#. You also have libraries like ImageSharp which is a C# image processing library. Both of these things require fast operation with low memory overhead.

I was looking up information to back up why C# is fast and there are many articles that trade blows saying "C++ is faster", "C# is faster" or "Java is faster" (kidding on that last one). Because there was such a mixed bag of tests, opinions and explanations, I thought I would instead explain from another angle.

Regardless of your programming language, you can write slow code. In C, C++, C# etc, you can also write very performant code. Static compilers can make certain optimisations from the get-go and results are naturally crazy fast for being compiled to such a low level. JIT compilers can make optimisations on the fly and results can be finely tuned for any specific target machine.
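
As a trivial sketch of that point - the same task in the same language, with very different allocation behaviour:

// Two ways to build the same comma-separated string: the first re-copies the
// whole string on every iteration, the second appends into a growing buffer.
using System.Text;

static class CsvJoin
{
    public static string JoinSlow(int[] values)
    {
        string result = "";
        foreach (var v in values)
            result += v + ",";        // allocates a brand new string each time
        return result;
    }

    public static string JoinFast(int[] values)
    {
        var sb = new StringBuilder();
        foreach (var v in values)
            sb.Append(v).Append(','); // amortised appends, far fewer allocations
        return sb.ToString();
    }
}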

I don't see a reason why someone couldn't write a fast low-level binary database protocol or a neural network in C# and be as fast and as memory efficient as their needs actually are.

Eventually the language won't be the bottleneck anymore but the device it is running on (CPU vs GPU etc).

Rasmus Schultz

Since you mention ImageSharp, take a look at this article:

devblogs.microsoft.com/dotnet/net-...

There is a radical performance difference between ImageSharp and something like SkiaSharp, which outsources to C++.

People outsource this kind of work to C++ for good reasons - and languages like C# offer run-time interop with C and C++ binaries for some of the same reasons.
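
That interop is essentially a DllImport away. A minimal sketch (the native library name and probing rules differ per platform, so treat this purely as an illustration):

// Sketch of that run-time interop: calling a function from a native C library
// via P/Invoke. The library name below assumes a Linux-style libm; a real
// cross-platform binding would resolve the library per OS (for example with
// NativeLibrary.SetDllImportResolver on .NET Core 3.0+).
using System;
using System.Runtime.InteropServices;

static class NativeMath
{
    [DllImport("libm", EntryPoint = "cos", CallingConvention = CallingConvention.Cdecl)]
    private static extern double NativeCos(double x);

    static void Main()
    {
        Console.WriteLine(NativeCos(Math.PI)); // -1, computed by the C library
    }
}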

My main point isn't so much whether bytecode VM languages need to outsource - more the fact that they do. Languages like C#, JavaScript, PHP, Ruby, Python and so on, all have a bunch of run-time APIs that were written in C or C++, most of it inaccessible to the developers who code in those languages.

C# isn't much different in that regard. While, yes, the C# compiler itself was written in C#, the CLR and all the low-level types etc. are C code, mostly inaccessible to developers who work in C#.

You could argue that CLR isn't part of the C# language - but C# isn't really complete without the ability to run it, and I'd argue the same for any language that isn't fully bootstrapped and self-contained.

From what I've seen, VM-based languages (with JIT and garbage-collection and many other performance factors) aren't really suitable for very low-level stuff, which always gets outsourced.

Something common like loading/saving/resizing images is an excellent example of something that practically always gets outsourced - and whenever anybody shows up claiming they've built a "fast" native alternative, these typically turn out to be anywhere from 2 to 10 times slower, usually with much higher memory footprint.

From my point of view, languages like C#, JavaScript and PHP are all broadly in the same performance category: they're fast enough. But not fast enough to stand alone without outsourcing some of the heavy lifting to another language.

And yeah, maybe that's just the cost of high level languages. 🤷‍♂️

James Turner • Edited

That article you've linked to is an interesting read, and you're right about the CLR; I just had a look at the CoreCLR project on GitHub and it is two-thirds C# and one-third C++ (which seems to be the JIT and GC).

With the article and the ImageSharp comparison, I am curious how much faster it would be now, two years on, with newer versions of .NET Core (there have been significant improvements) as well as any algorithmic updates that may have occurred - might see if I can run the test suite on my laptop. (EDIT: See bottom of comment for test results)

That C# isn't (in various tests and applications) as fast as C++ doesn't mean it always is or has to be that way. After all, it isn't C++ code that is executed on the processor, it is machine code. If C# and C++ generated the same machine code, arguably they should be identical in performance.

C# usually can't generate the same machine code due in part to overhead in protecting us from ourselves as well as things like GC etc. C# though does have the ability to work with pointers and operate in "unsafe" manners which may very well generate the exact same code.

All that said, I do expect though that it is naturally easier to write fast code in C++ than C# for that exact reason - you are always working with pointers and the very lowest level of value manipulation. The average C# developer likely would never use the various unsafe methods in C# to get close to the same level of access.
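
For anyone curious what that pointer-level C# looks like, a small sketch (requires unsafe blocks to be enabled in the project; illustrative only):

// Sketch of "unsafe" C#: summing an int array through raw pointers, much like
// a C/C++ loop would, with the array pinned so the GC cannot move it mid-loop.
static class UnsafeSum
{
    public static unsafe long Sum(int[] values)
    {
        long total = 0;
        fixed (int* start = values)
        {
            int* current = start;
            int* end = start + values.Length;
            while (current < end)
            {
                total += *current;
                current++;
            }
        }
        return total;
    }
}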

Just an aside, it seems like the tables of data and the graphs displayed in that article you linked to don't actually match each other. 🤷‍♂️


EDIT: I did run the tests from the article you linked me as they had it all configured up on GitHub. Short answer: Skia is still faster but the results are interesting.

Note 1: I disabled the tests of the other image libraries as I was only curious about ImageSharp.
Note 2: If you want to replicate it on your machine, you will need to change the NuGet package referenced for ImageSharp as that version isn't available any more. I am using the latest beta version instead. This means one line needs to be altered in the tests as a property doesn't exist - you'll see when you compile it.

BenchmarkDotNet=v0.11.5, OS=Windows 10.0.17134.706 (1803/April2018Update/Redstone4)
Intel Core i7-6700HQ CPU 2.60GHz (Skylake), 1 CPU, 8 logical and 4 physical cores
Frequency=2531249 Hz, Resolution=395.0619 ns, Timer=TSC
.NET Core SDK=2.2.101
  [Host]            : .NET Core 2.2.0 (CoreCLR 4.6.27110.04, CoreFX 4.6.27110.04), 64bit RyuJIT
  .Net Core 2.2 CLI : .NET Core 2.2.0 (CoreCLR 4.6.27110.04, CoreFX 4.6.27110.04), 64bit RyuJIT

Job=.Net Core 2.2 CLI  Jit=RyuJit  Platform=X64  
Toolchain=.NET Core 2.2  IterationCount=5  WarmupCount=5  

|                                Method |     Mean |      Error |    StdDev | Ratio | RatioSD |     Gen 0 | Gen 1 | Gen 2 |  Allocated |
|-------------------------------------- |---------:|-----------:|----------:|------:|--------:|----------:|------:|------:|-----------:|
|   'System.Drawing Load, Resize, Save' | 594.9 ms | 113.475 ms | 29.469 ms |  1.00 |    0.00 |         - |     - |     - |   79.96 KB |
|       'ImageSharp Load, Resize, Save' | 318.8 ms |  10.496 ms |  2.726 ms |  0.54 |    0.02 |         - |     - |     - |  1337.7 KB |
| 'SkiaSharp Canvas Load, Resize, Save' | 273.5 ms |  10.264 ms |  2.665 ms |  0.46 |    0.02 | 1000.0000 |     - |     - | 4001.79 KB |
| 'SkiaSharp Bitmap Load, Resize, Save' | 269.1 ms |   6.619 ms |  1.719 ms |  0.45 |    0.02 | 1000.0000 |     - |     - | 3995.29 KB |

So SkiaSharp actually uses 3x the memory for the ~26% performance improvement.

James Jackson-South

The performance differences between the two libraries boil down to the performance of our jpeg decoder. Until .NET Core 3.0 the equivalent hardware intrinsics APIs simply haven't been available to C# to allow equivalent performance but that is all about to change.

devblogs.microsoft.com/dotnet/hard...

The ImageSharp decoder currently has to perform its IDCT (Inverse Discrete Cosine Transform) operations using the generic Vector<T> struct and co. from System.Numerics.Vectors. We have to move into single-precision floating point in order to do so, and then alter our results to make them less correct and more in line with the equivalent integral hardware-intrinsics-driven operations found in libjpeg-turbo (this is what Skia and others use to decode jpegs). This, of course, takes more time and we want to do better.

With .NET Core 3.0 and beyond we will finally be able to use the very same approach other libraries use and achieve equivalent performance, with the added bonus of a less cryptic API and an easier installation story.
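
As a rough sketch of the two SIMD routes being contrasted here - the portable Vector<T> API versus the explicit x86 intrinsics arriving with .NET Core 3.0 (simplified far beyond a real IDCT, and the method names are made up):

// Simplified contrast between the two SIMD routes: portable Vector<float>
// versus explicit x86 SSE intrinsics (System.Runtime.Intrinsics, .NET Core 3.0+).
// A real implementation would check Sse.IsSupported and fall back accordingly.
using System.Numerics;
using System.Runtime.Intrinsics;
using System.Runtime.Intrinsics.X86;

static class SimdSketch
{
    // Portable path: the JIT maps Vector<float> onto whatever SIMD width the CPU has.
    public static void ScalePortable(float[] data, float factor)
    {
        var vFactor = new Vector<float>(factor);
        int i = 0;
        for (; i <= data.Length - Vector<float>.Count; i += Vector<float>.Count)
        {
            var v = new Vector<float>(data, i) * vFactor;
            v.CopyTo(data, i);
        }
        for (; i < data.Length; i++) data[i] *= factor; // scalar tail
    }

    // Explicit path: raw pointer + SSE instructions, the style libjpeg-turbo-like code uses.
    public static unsafe void ScaleSse(float* data, int length, float factor)
    {
        var vFactor = Vector128.Create(factor);
        int i = 0;
        for (; i <= length - 4; i += 4)
        {
            var v = Sse.LoadVector128(data + i);
            Sse.Store(data + i, Sse.Multiply(v, vFactor));
        }
        for (; i < length; i++) data[i] *= factor;      // scalar tail
    }
}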

Incidentally our Resize algorithms not only offer equivalent performance to Skia but also do a better job, yielding higher quality output and better handling edge cases that trip up other graphics APIs.

It's perfectly possible to write very high performance code with C# - many great APIs already exist to help do so (Span<T>, Memory<T>, etc.) and there is a great focus at Microsoft currently on further improving the performance story.

James Turner

I'm writing an article about maximising performance in .NET which starts off with some simple things but moves on to things like using Span<T>, unsafe operations, or even the dense matrix you helped explain to me on Twitter.
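
As a small taste of the Span<T> side of that, here is a sketch of allocation-free binary parsing (the record layout is invented purely for illustration):

// Sketch of allocation-free parsing with Span<T>: reading a small binary header
// straight out of a byte span, no BinaryReader or intermediate arrays. The
// layout ([int32 id][int16 flags][int64-encoded double]) is invented for this example.
using System;
using System.Buffers.Binary;

readonly struct PacketHeader
{
    public readonly int Id;
    public readonly short Flags;
    public readonly double Value;

    public PacketHeader(int id, short flags, double value)
    {
        Id = id;
        Flags = flags;
        Value = value;
    }

    public static PacketHeader Parse(ReadOnlySpan<byte> buffer)
    {
        int id = BinaryPrimitives.ReadInt32LittleEndian(buffer);
        short flags = BinaryPrimitives.ReadInt16LittleEndian(buffer.Slice(4));
        double value = BitConverter.Int64BitsToDouble(
            BinaryPrimitives.ReadInt64LittleEndian(buffer.Slice(6)));
        return new PacketHeader(id, flags, value);
    }
}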

When I saw that blog post about hardware intrinsics, that got me even more excited to share the big performance gains that can be had.

There definitely hasn't been a better time to be working with .NET!

Ernestas • Edited

Exciting times. The .NET team is the best thing at Microsoft.

Now if they'd stop breaking laptops with every Windows update and make Azure DevOps usable, I would be a happy man.

James Turner

I am liking Azure DevOps; it does seem to be getting better every time I open it, but yeah, still some room for improvement.

What I would really like is a better Azure Portal. I find it so slow for many actions (both in page load times and when saving a change - it seems like a multi-minute wait for some changes to save). It also feels way more complex than it needs to be and doesn't seem to have "obvious" .NET Core support even though it is there (it talks about .NET Framework versions and you need to install an extension in an App Service to have .NET Core support). I could probably go on and on about that, and I barely use much of the functionality of Azure.

Ernestas

Maybe I'm stupid and missing something obvious, but our workflow in DevOps is completely ruined because there are no proper notifications. I come from the Jira world, where doing an action on a task automatically puts you on a watch list for that task, and all the mentions or changes are shown as Facebook-style notifications.

In DevOps, the only notifications are emails, and all of them have identical subjects. In a mess of task, build, release, PR and test case emails, no one actually looks at them.

20 people on the project literally stopped using any in-DevOps communication because of the terrible notifications.

And yeah, I agree regarding Azure itself. But to me that stuff, while not always consistent, is something you figure out, and the next time you know how to do it. Also, a lot of Azure stuff is managed through code/APIs, so you don't live in it. Azure DevOps, on the other hand, completely ruins the experience of managing tasks and communicating.

Rant over.

James Turner

Ahhhh, I see! I guess the reason I don't have as many issues with DevOps is simply because I haven't even used that functionality!

I'm also from the Jira world and if what you say is true, I can totally understand that being frustrating.

My experience with DevOps is more around repositories and pipelines. It is a bit fiddly testing pipeline configurations without constantly pushing commits of my YML file changes, but various other little UI things have gotten better. They recently added self-hosted build agents, which worked so smoothly too.

With build and release notifications though, I don't think I ever got one - I might have disabled that when I was setting up my account as so many things have frivolous default notifications enabled. I might need to pay more attention to what I am disabling!

Scott Simontis

I feel like there's too much useless fluff in the Azure DevOps UI and screen flows. GitLab is by far my favorite interface.

Multi • Edited

Microsoft made a mistake when they didn't make .NET open source and cross-platform, but the company was different back then.

When they introduced .NET Core, I was impressed that they took the open source and cross-platform route and finally realised it's crucial in order to stay competitive, but I was skeptical about its reliability.

When .NET Standard was introduced, I was confused. So many different .NET versions, so many APIs, and so much synchronization needed by anyone who wants to target .NET Standard.

Today, I am confident Microsoft is going to take the best of the whole mishmash of .NET frameworks to create an amazing, easy-to-deal-with and truly cross-platform framework.
Bravo!

 
Scott Simontis

I wasn't going to chime in because my experience with Xamarin was back in mid-2015, but it sounds like not much has changed. My experience was that a lot of the libraries which were supposed to abstract away the platform-specific differences and provide universal access to OS features and device hardware did not work as advertised. In mid 2015, a lot of library functionality was still not considered Production grade, and I found that if you had a moderately complex use case, then you had no choice but to dig into the Android and iOS SDKs. I was essentially writing Java in C# at the end of the day.

My takeaway was that I did not gain any benefit or productivity advantage from using a language I am very comfortable with. My time would have been better spent learning how to work natively with Android. Especially now that Kotlin exists, I would much rather learn that and have access to all of the entire ecosystem of frameworks and libraries available.

I also encountered some bizarre bugs with the Xamarin tooling. One day, the code on my phone somehow got out of sync with the code on my computer, so debugging was completely broken. I was so confused why I was stepping through code and visiting every branch of an if statement, but I finally put it together and redeployed the code. Never have encountered a gem like that outside of Xamarin, even in my embedded SW days!

Yaser Al-Najjar

+1 for this...

With Xamarin, the amount of time you will spend developing a simple iOS or Android app is going to be 10x what you would spend in Java/Kotlin or Swift.

And for what? For literally nothing!

What you can do in Xamarin can be done in any native JS framework (like Vue or React).

Multi

I actually dislike the fact that PWAs / Electron apps are becoming the way to build cross platform apps.

For a while I thought Microsoft was planning to create a new cross-platform framework, or make WPF or UWP cross-platform (UWP being more limited, and less likely to run on .NET 5 because of the modern Windows runtime).

Then I understood that Microsoft embraces Xamarin.Forms and expects us to use it instead (and for mobile apps).
I also tried making a simple app, but huge build times and over-complication made me give up.

I think that right now it is almost too late, but still just the right time, to announce a cross-platform XAML framework (not Xamarin.Forms, please) to run on .NET 5 and crush Electron!

Yaser Al-Najjar

What's the problem with Electron?

Why not embrace Electron with .NET?

Scott Simontis

The most common complaint I hear against Electron is memory consumption. While I don't have any experience personally developing Electron apps, this has been my experience as a consumer...simple applications seem to need 125-150MB of memory minimum.

Multi • Edited

The main problem with it is the memory and CPU usage (a Chromium / any-browser problem), which can get insane; in addition, it's a bit slow.
Second is that I prefer coding in C# over JS (TS is OK), which, as you said, can be solved by combining them.

Yaser Al-Najjar

@ssimontis , @themulti0

I can totally understand that memory usage might be an issue, but most developers are using VS Code daily with no problem... I think 200 MB of memory usage isn't a big problem today.

Aside from all that, building UI in HTML and CSS is really favorable in many ways.

Scott Simontis

I guess it comes down to the value the app adds...in the case of VS Code, it's my favorite text editor so I'm willing to pay the price. On my desktop and work laptops, I have enough memory that 95% of the time, it isn't an issue. But with my 2013 MacBook Pro, which has 8GB of RAM soldered in and can never be updated, this is a dealbreaker for most applications. A lot of tools built with Electron seem to be aimed at developers, who likely have pretty powerful machines, but this isn't a luxury that everyone has in today's world.

Yaser Al-Najjar

I can totally understand what you say... and you're right, I haven't encountered an app for normal users built with Electron 👌

Albert Moravec

The way I understand it, there will be no .NET Standard, .NET Framework or .NET Core after .NET 5. And I think it's only natural - the development has been heading this way for quite a few years, ever since they announced Standard.

Another aspect that in my opinion led to this is the evolution of the C# language, because the .NET team stated that language development will only continue outside of the .NET Framework.

Multi

I agree that this is the right direction.
But .NET Core is not going anywhere; it will just be rebranded and upgraded to .NET 5. As Microsoft said: ".NET Core is the future of .NET".

Also, Microsoft has not yet said what's going to happen with .NET Standard. I think it should stay: although .NET 5 will merge nearly all the different .NET foundations together, .NET Framework is left out, and developers still need a way to target both .NET Framework and .NET 5 in their libraries. So yeah, until .NET Framework is completely deprecated, Microsoft cannot drop .NET Standard.

James Turner

Wow, those are strong words! I never used Xamarin so I haven't experienced anything first hand like that.

What would be one or two of the biggest issues, specifically, so I can try to understand how bad it is?

Scot McSweeney-Roberts

We’re skipping the version 4 because it would confuse users that are familiar with the .NET Framework

By that logic, it should be .NET 6, as Mono is already on 5.20. Or just jump to 8 to sync with C#.

 
Slavius

The problem comes when you need to use multiple Electron apps concurrently. Open VS Code, GitKraken (a Git GUI), MS Teams, Slack and Tidal, and all of a sudden you need 6-8 GB of RAM for what would typically need 600-800 MB natively. Gosh, MS Excel uses 135 MB of RAM when open, and it has more features and value than all the aforementioned apps together...

John Peters

Thanks James.
There was a time many of us went all in on MSFT. Back in 1990, and for 10 years after, MSFT was the ship to be on.
But then their arrogance allowed them to totally miss the cell phone world, plus they split their own desktop into UWP and WPF. Windows 8 turned the desktop into a cellphone-like layout and Win32 was deemed dead. Then they killed Silverlight and denied the WPF-is-dead rumors for 10 years.

Bottom line... MSFT has a great track record of throwing their adopters and dev community over the side of the ship in the middle of the sea. Don't trust them implicitly, and realize they've lost the browser wars.

Ali Sherief

Well, we already know there is no graphics or windowing support in .NET Core 3; .NET 5 probably won't change that, but it would be nice if we could build such support on top of GLFW, wouldn't it?
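
Nothing stops a library from binding GLFW itself via P/Invoke in the meantime; a bare-bones sketch (the native library name and its loading rules are platform specific, so this is illustrative only):

// Bare-bones GLFW binding sketch: opening a window from C# via P/Invoke.
// "glfw3" is the usual Windows DLL name; other platforms need different
// library names/resolution, so treat this strictly as an illustration.
using System;
using System.Runtime.InteropServices;

static class GlfwWindowDemo
{
    [DllImport("glfw3")] static extern int glfwInit();
    [DllImport("glfw3")] static extern IntPtr glfwCreateWindow(int width, int height, string title, IntPtr monitor, IntPtr share);
    [DllImport("glfw3")] static extern int glfwWindowShouldClose(IntPtr window);
    [DllImport("glfw3")] static extern void glfwPollEvents();
    [DllImport("glfw3")] static extern void glfwTerminate();

    static void Main()
    {
        if (glfwInit() == 0) throw new Exception("GLFW failed to initialise");

        IntPtr window = glfwCreateWindow(800, 600, "Hello from .NET", IntPtr.Zero, IntPtr.Zero);
        while (glfwWindowShouldClose(window) == 0)
        {
            glfwPollEvents(); // pump the OS event loop; rendering would go here
        }
        glfwTerminate();
    }
}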

James Turner

Yeah, graphics/window support in .NET would be great - especially as a cross-platform solution. I don't know what Microsoft has planned in that respect, but I do imagine they would need to bulk up their .NET team to undertake such an endeavour.