You're getting this backward. C did get classes, and that is called C++. Early versions of C++ were largely based on C.
Perhaps the right question here is: why do people still use C? Part of the answer is that, largely, they don't. But where C does survive, it's as a much smaller, much less tortured high-level language that is still close to the hardware; that's why you find C-like dialects in GPU compute, on Arduino, and in similar tech.
So, if C++ is C with classes, then what in the hell warranted C# becoming its own language?
There are a couple of answers here. One is that you can call C++ a superset of C without provoking (many) programmers; another is that C# is more of an IL/CLR language - a bastard sibling of Java, if you like - in other words, a "bytecode language". That puts it in another ball-park, perhaps even another ball-game.
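To make "bytecode" concrete on the Java side, here's a sketch: compile a trivial class with javac, then disassemble it with javap -c. The disassembly shown in the comments is typical output; the exact constant-pool indices (#7, #13, ...) vary by compiler version.

```java
// Hello.java - compile with: javac Hello.java
public class Hello {
    public static void main(String[] args) {
        System.out.println("hi");
    }
}

// Running `javap -c Hello` shows the stack-machine bytecode the JVM
// actually executes for main (indices below are illustrative):
//
//   0: getstatic     #7   // Field java/lang/System.out:Ljava/io/PrintStream;
//   3: ldc           #13  // String hi
//   5: invokevirtual #15  // Method java/io/PrintStream.println:(Ljava/lang/String;)V
//   8: return
```

It's this intermediate form, not native machine code, that gets shipped and then executed (or JIT-compiled) by the virtual machine.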
What's so special about a "bytecode language"?
Two perspectives here. One is portability: bytecode is virtual machine code, so you compile to bytecode on one computer and then distribute the compiled product across several OSes (win/mac/nix) or architectures (such as x86 or ppc64) - Java's once-famed "write once, run anywhere". The other is how much the runtime knows about your program while it is running. In Java or C# you can ask a class what it is, you can create classes and methods at runtime, and more generally your program is "self-aware" (via 'reflection'). That flexibility comes at a cost in performance.
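A minimal sketch of that self-awareness in Java, using only the standard java.lang.reflect API (java.lang.String is just a convenient class to inspect):

```java
import java.lang.reflect.Method;

public class ReflectionDemo {
    public static void main(String[] args) throws Exception {
        // Look a class up by name at runtime - something compiled C/C++ cannot do.
        Class<?> clazz = Class.forName("java.lang.String");
        System.out.println("Loaded: " + clazz.getName());

        // Ask the class what methods it declares.
        for (Method m : clazz.getDeclaredMethods()) {
            System.out.println("  " + m.getName());
        }

        // Find and invoke a method purely by its name.
        Method length = clazz.getMethod("length");
        System.out.println("\"hello\".length() = " + length.invoke("hello"));
    }
}
```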
At runtime, C and C++ know almost nothing about the program being run (C++ keeps only minimal run-time type information, enough for typeid and dynamic_cast), so there is no "C++ runtime" in the JVM/CLR sense.
Wow. You need a tutor : P ?
-- Anyways. Note that some of these properties are hard constraints and others are not. You cannot run compiled C/C++ on a different architecture (except by emulating the target processor), but you could, at least in theory, bake information about classes into the compiled code. In practice, natively compiled languages rarely take that approach; what they do instead is use every opportunity to optimize your code for the target architecture.
Okay, the portability one is what I thought.
Thanks for your explanation.