Java is Unsound: The Industry Perspective

Ross Tate ・ 9 min read

Recently Nada Amin and I discovered that Java and Scala are unsound. We submitted the finding and related discussion to OOPSLA, an academic conference on object-oriented programming. It has been great to see the paper making rounds in industrial circles, but there also seems to be a lot of confusion and I think that's our fault. The paper was written for academics, and so it doesn't discuss the issue from industry's perspective. I'm crunching on a deadline right now, so let me write an abridged version of the industry-targeted paper here, focusing just on Java.

What Does “Unsound” Mean?

Most type systems aim to provide some sort of guarantee about how a well-typed program will behave. For example, in many languages all memory accesses are “safe” in well-typed programs. A type system is sound if it actually succeeds at providing that guarantee. Thus, informally, a type system is sound if it ensures what its designers intended it to ensure. This is much like programming: a program is correct if it does what it is supposed to do.

Java's type system is intended to ensure that if a method asks for an Integer, then it will only ever be given Integers, never Strings. Thus the following pseudo-code is rejected by the type system:

public Integer increment(Integer i) { return i + 1; }
String countdown = increment("98765432");

Sure, JavaScript can successfully execute this program despite its nonsensical nature, but Java is implemented very differently, and allowing this code could really mess up the heap (or even stack if you've ever enjoyed some C/C++ memory mismanagement errors).

Now you might already know that Java's type system has some unsound qualities. For example, for historical reasons Java has covariant arrays, making this code type check:

String[] strs = { "NaN" };
Object[] objs = strs;
objs[0] = 1;
String one = strs[0];

And when generics were added, Java used “raw types” so that genericized code could be compatible with pre-generics code. Consequently the following type checks:

List<Integer> ints = Arrays.asList(1);
List raw = ints;
List<String> strs = raw;
String one = strs.get(0);

Thus both of these examples seem to allow the variable one to contain an Integer despite the fact that it is declared to have type String.

Now this brings us back to intent. The designers intended the type system to allow this behavior. This is because, although these examples are clearly bad, there are many examples where this permissiveness is useful. After all, untyped languages are all about allowing bad things to happen so that more cool things can happen as well. And because this behavior was intentional, the designers planned for it. Every time you assign a value into an array (of a reference type), the runtime looks up the array value's actual element type and throws an ArrayStoreException if it's incompatible. So in the above sample, the assignment objs[0] = 1; fails before the seemingly unsound assignment String one = strs[0]; can be reached. Similarly, whenever get is called on a List<String>, the runtime checks if the returned value is actually a String in case a raw type was misused, throwing a ClassCastException otherwise.
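Both planned checks can be seen firing in a small self-contained sketch (the class and method names here are mine, not from the article):

```java
import java.util.Arrays;
import java.util.List;

class RuntimeChecks {
    // Covariant arrays: every store is checked against the array's
    // actual element type at run time.
    static boolean arrayStoreFires() {
        String[] strs = { "NaN" };
        Object[] objs = strs;   // allowed by covariance
        try {
            objs[0] = 1;        // runtime check: an Integer is not a String
            return false;
        } catch (ArrayStoreException e) {
            return true;        // thrown before strs[0] could ever hold an Integer
        }
    }

    // Raw types: the compiler inserts a cast at the use site of get.
    @SuppressWarnings({"rawtypes", "unchecked"})
    static boolean rawTypeCastFires() {
        List<Integer> ints = Arrays.asList(1);
        List raw = ints;
        List<String> strs = raw;       // unchecked, but compiles
        try {
            String one = strs.get(0);  // inserted (String) cast fails
            return false;
        } catch (ClassCastException e) {
            return true;
        }
    }
}
```

In both cases the check throws before the seemingly unsound read or write can complete, which is exactly the planned-for behavior described above.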

So Java's type system seems to be as sound as it was intended to be, though maybe not as sound as you would like it to be. And since generics were introduced, it was believed to be “truly sound” so long as you didn't use these intentional backdoors.

“True” Unsoundness

Here's our “true unsoundness” example that has been making the rounds most recently. It assigns an Integer instance to a String variable. According to the Java specification, this program is well-typed, and according to the Java specification it should execute without exception (unlike the aforementioned backdoors) and actually assign the Integer instance to the String variable, violating the intended behavior of well-typed programs. Thus it illustrates that Java's type system is unsound.
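For reference, here is that program (the coerce example from the paper, reproduced so the discussion below is self-contained):

```java
class Unsound {
    // Constrain requires its second type argument to be a subtype of its first.
    static class Constrain<A, B extends A> {}

    static class Bind<A> {
        // If some B is a subtype of A, a B can be upcast to an A.
        <B extends A> A upcast(Constrain<A, B> constrain, B b) {
            return b;
        }
    }

    // Coerces any T to any U, with no casts anywhere in sight.
    static <T, U> U coerce(T t) {
        Constrain<U, ? super T> constrain = null;  // null inhabits every reference type
        Bind<U> bind = new Bind<U>();
        return bind.upcast(constrain, t);
    }
}
```

Invoking `String zero = Unsound.coerce(0);` is the whole attack: the specification says it should bind an Integer to a String variable, though in practice (as discussed below) a ClassCastException is thrown instead.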

Now you, like many others, might be confused by this example and might have already done some things to make yourself even more confused. So let me first name each common point of confusion:

  1. People don't understand why this type checks or even actively believe it should not type check.
  2. If you feed this to the Java compiler you have at hand, the compiler might say the program does not type check, contradicting my claim.
  3. If your compiler does type check it and then you run it, you get a ClassCastException, also contradicting my claim.

You might also wonder why this matters, especially since you and your colleagues are unlikely to ever write code like this. I'll address all of these concerns, though out of order.

“But It Doesn't Compile”

If your compiler doesn't type-check our example, I have news for you. No, your compiler isn't catching the bug. In fact, your compiler itself has a bug. Your compiler is broken. The Java specification says this example should type check and compile. Your compiler is supposed to implement that specification, and it's failing here. It's failing because it didn't anticipate the corner case we created here.

“But It's Just a Corner Case”

How many of the bugs you've dealt with have been corner cases? A good programmer is supposed to anticipate and handle corner cases, not excuse a bug just because it is hard to trigger. Sure, sometimes resources are constrained and you might decide that the bug is low priority, but you can only assess its priority after you identify it. So even if this bug turns out to be low priority, which I'll get to in a bit, it's still important to know it exists.

“But an Exception is Thrown”

And thank god one is, and purely due to some good luck! Remember how a call to get on a List<String> secretly checks that the returned value is actually a String? This secret check is done for backwards compatibility with raw types. It shouldn't be necessary when there are no raw types in the program. But thanks to it, a ClassCastException is always thrown when our example is run (if your compiler is smart enough to compile it).

So had history been a little different, either with Sun deciding to throw out backwards compatibility after adding generics, or with Sun adding generics into the JVM, or with Java first being released with generics, then this exception would not be thrown. Instead I could cast anything into an int[] and get direct access to the raw bytes of many objects, which I could then use to inject arbitrary code (using return-oriented programming to bypass security measures).

“But No One Would Ever Write This”

You probably would never write this, and your colleagues would never write this. That reasoning is good to apply in many situations. I in fact do this all the time; it's a huge part of my research agenda. But you have to be careful where you apply it. Soundness is a security guarantee, not a usability concern. What matters is whether someone can write this, because then that someone can get around the security measures that people have placed their confidence in and do whatever they would like to do. In the case of soundness, it's the malicious programmer you should be worried about, not just you and your friends.

“But The Code Shouldn't Type-Check”

Many people call out the type Constrain<U,? super T> and say it should be invalid because Constrain requires its second type argument to be a subtype of its first, while T, and every supertype of T, is not a subtype of U. But this reasoning confuses invalidity with uninstantiability. It's actually faulty reasoning that's been around in both industry and academia for at least as long as I've been out of high school (which has been a while).

Here's a simpler version of the fallacy:

class Foo<T extends Number> {
    Number foo(T t) { return t; }
}

Is it safe for Foo<String> to be a valid type? For example, is the following code safe?

Number bar(Foo<String> foos) { return foos.foo("NaN"); }

The conventional wisdom up until now, and probably your gut instinct, would say this is unsafe, since it turns a String into a Number, but the fact is that it's completely safe. Here's an equivalent program that illustrates why:

interface IFoo<T> {
    Number foo(T t);
}
class Foo<T extends Number> implements IFoo<T> {
    public Number foo(T t) { return t; }
}
Number bar(IFoo<String> foos) { return foos.foo("NaN"); }

For any class with a constrained type parameter, I can just make a corresponding interface without that constraint and use that throughout the program. This will behave identically to the original program. (There are some clarifications to this, but I'll leave them to the paper.)

So why does it seem like Foo<String> should be invalid? And how is bar safe? Well, the answer to both is the same: Foo<String> is uninstantiable. You'll never be able to create an instance of Foo<String> because String is not a subtype of Number. Thus the seemingly unsafe invocation of foo in bar will never happen, because you can't invoke a method of an object that can never exist.

So by this same reasoning it's perfectly safe for Constrain<U,? super T> to be valid. You'll only ever be able to have an instance of it if T is a subtype of U at run time, satisfying the constraint.

But Constrain<U,? super T> is inhabitable in Java, because null inhabits every reference type. So keep that in mind as I explain why this program type checks.
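Here's a minimal sketch (the class and method names are mine) of null inhabiting the constrained type:

```java
class NullDemo {
    // The second type argument must be a subtype of the first.
    static class Constrain<A, B extends A> {}

    // No instance of Constrain<U, ? super T> can ever be constructed when
    // T is not a subtype of U -- but null inhabits every reference type,
    // so the variable below can still be given a value.
    static <T, U> boolean inhabited() {
        Constrain<U, ? super T> c = null;
        return c == null;
    }
}
```

Calling, say, `NullDemo.<String, Integer>inhabited()` returns true even though no actual Constrain<Integer, ? super String> object could ever exist.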

Wildcard Capture

A Java wildcard, ?, is not a type. It is simply a piece of syntax representing something unknown. And different wildcards can represent different unknown types. So to reason about wildcards, Java uses a process called wildcard capture.

Suppose we have an expression e of type Constrain<U,? super T>. There is a wildcard in the type of this expression. The type-checker doesn't know what it represents, but it does represent something, so the type-checker gives that unknown something a temporary name, say X. Furthermore, the type-checker knows that the unknown something must at least be a supertype of T. Thus the type-checker views e as having type Constrain<U,X> where X is some supertype of T.

But the type-checker knows something more. It knows that X must be a valid type argument. Consequently, the type-checker also notes that X must be a subtype of U. We call this the implicit constraint on the wildcard, whereas super T is the explicit constraint on the wildcard. I found out that, although uncommon, these implicit constraints are in fact used for type-checking a number of Java libraries. The paper gives a specific example showing why implicit constraints are important.

Now look at the example again. In the call to bind.upcast(constrain, t), the type-checker does the aforementioned wildcard-capture process and views constrain as having type Constrain<U,X> where X is (explicitly) a supertype of T and (implicitly) a subtype of U. Consequently it infers X to be the type argument to bind.upcast and uses its explicit and implicit constraints to type-check the rest of the invocation.
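Wildcard capture can be seen in miniature with a helper method (this sketch and its names are mine, not from the paper):

```java
import java.util.Arrays;
import java.util.List;

class CaptureDemo {
    // X plays the role of the type-checker's temporary name for the wildcard.
    static <X> X first(List<X> list) {
        return list.get(0);
    }

    static Object firstOf(List<?> list) {
        // The ? is captured as some fresh X, which is then
        // inferred as the type argument to first.
        return first(list);
    }
}
```

Even though `?` is not itself a type, the call to first type-checks because capture gives the unknown element type a name the type-checker can reason with.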

Thanks, Null!

Okay, so if the wonky type isn't the source of unsoundness, what is? Well, amazingly enough, it turns out null pointers don't just cause bugs in programs, they cause bugs in type systems too! The reasoning for wildcard capture completely forgets about null pointers. It assumes there must be some actual Constrain<U,X> for some X, an assumption that manifests itself in the implicit constraint on X. So all of this boils down to a null-pointer bug. But unlike most null-pointer bugs, this one took 12 years to discover. Oh, and it's also not nearly as easy to fix as most null-pointer bugs, since every feature involved is used in practice. But the Java team is working on fixing it. And before you once more insist that the solution is to disallow the Constrain<U,? super T> type, realize that that kind of reasoning is what delayed core Scala's proof of soundness (without null) for a whole decade, and I'm not particularly eager to spend a decade solving this.

P.S.

Future language designers, please keep this in mind! Especially if you're not compiling to the JVM. Java was lucky it was held back by backwards compatibility; you may not be so lucky.

Posted by:


Ross Tate

@rosstate

Assistant Professor of Computer Science at Cornell University and Advisor to the Ceylon and Kotlin teams at Red Hat and JetBrains.

Discussion


This is a great post, but did you really have to ruin it by underlining some poor JavaScript knowledge?

const increment = i => i + 1;
increment("98765432");

That's a "987654321" because types in JS matters too.

Accordingly, that sentence fired back already:

"Sure, in JavaScript this might make sense, but Java is implemented very differently"

No, JS developers aren't that stupid or anti-academic, so thanks for dropping any reference to a programming language that has nothing to do with this report, and/or your findings, about Java.

From a JS, Python, PHP, C, C#, and Java developer.

 

Hi Andrea!

I know that's what it does in JavaScript. That's the point; I'm contrasting with JavaScript. JavaScript's implementation is designed to be safe without static types; consequently it heavily uses run-time type information to determine if an operation is safe and how it should execute. Java, however, relies on static type information to make run-time lookups more efficient (e.g. virtual-method tables and interface-method tables). Consequently, while this is safe in JavaScript and has a sensible semantics, in Java it would result in nonsense and even mangle memory (well, in more complex cases, not this one).

So sorry this came off as deriding JavaScript. It was meant as a positive: JavaScript handles this sensibly, whereas Java fails miserably (and for good reason in both cases).

 

Hi Ross,
we're not quite on the same line here.

"while this is safe in JavaScript and has a sensible semantics, in Java it would result in nonsense"

That's the thing, it's about time the industry understands JavaScript is not just a toy driven by kids.

Nobody in the JS developer world I know would ever write that side-effecting monstrosity, which clearly denotes that whoever wrote that function had no idea what she/he was doing.

So yes, that example is nonsense in JS too.

The JIT also would deopt from a function invoked with randomly typed arguments, so that everything that can be optimised upfront from the VM side is in vain ... because Java chaps still think JS is all about not caring about types?

This is yet another misunderstanding of what JavaScript is, how it is used, and what are developers common-sense practices to not abuse its coercion-based type system that can produce side effects.

As summary: is it really necessary to bring in poor, and unrealistic, JavaScript usage in this post? Do you really need JS to prove your point in here? I don't think so.

People bought domains to drop the useless comparison and the misunderstanding, that reading in 2017 finally somebody demonstrated Java ain't sound so it's actually not even superior to other PLs, but it has to underline a non-real-world deriding case of another language, feels a bit lame.

Use Python, PHP, use whatever else you want, not its misunderstood rival for the last 25 years please.

Thank you!

javascript.crockford.com/javascrip...
javascriptisnotjava.io

But the behavior of the above is well defined for JavaScript. But it shouldn't happen in Java, and creates behavior that contradicts the intent of the language designers.

None of the code in this article is something you would expect in production.

The intent of the Java code is to sum num + 1 and there is an illusion that's what Javascript magically is going to do, since the intent, expecting a number as input and a number as output, is clear to anyone.

Saying that passing a string is fine on JS side is insulting all JavaScript developers that perfectly understand the intent of that function and would never pass a bloody string to it.

Is it really that hard to understand?

Ooohhh, I think I get what you're saying. Sorry, I was so focused on the example that I didn't carefully consider the words around it. I changed the phrasing to make my intent clearer. Can you let me know if the new version addresses the concern? If so I'll update the other renditions of the article.

It's better ... but ... TypeScript, as well as Flowtypes, allow types in JS via tooling that would never let you write the following code:

function increment(i: number): number {
  return i + 1;
}
// double error: the input is wrong, and so is the expected result type
const countdown: string = increment("98765432");

The JS env and surrounding tooling got better since 90's so once again,
is it really necessary to include at all in this document, an outdated, misleading, inappropriate, abuse of a scripting programming language that already has been historically confused, or wrongly compared, to Java ?

What I am saying is that the correct replacement for this sentence:

"Sure, JavaScript can successfully execute this program despite its nonsensical nature,"

would probably be like:

"Sure, a version of the code that wouldn't have types specified in it, used by a rookie developer that wouldn't care about JavaScript primitive types and their coercion possible side effects, could successfully execute this program despite its nonsensical nature and wrong expectations in terms of returned value ..."

Do you see this is not going to really add anything to this document?

Do you see there's no point at all in even naming JS in a scenario that nobody would be in since the year 2000?

Anyway, thanks for, at least, trying to understand what I meant.

Best Regards

If I felt like this example wasn't necessary, I would remove it. But the problem is that it's in there because people were getting confused about specifically the issues I discuss there, namely that soundness is about the interplay between the type system and the semantics. Covariant arrays, raw types, and JavaScript were each explicitly mentioned by various people, so I discussed each one.

Moving forward, it sounds like, although not perfect, I've at least managed to make things a little better. Ideally I could provide the fully clarified explanation, but attention spans are limited, so I'll leave it at this compromise.

Thanks for helping me understand your perspective better, even though it took me a while to at least get a little close!

The juxtaposition seemed appropriate and certainly didn't appear to be an affront to javascript or javascript developers. I'm thoroughly confused by the strong reaction to it.

Oh, Andrea, I forgot to say, that website you linked to is amazing! Java is to JavaScript as ham is to hamster. Fantastic! I'll keep that site in mind next time a friend gets confused.

 

Nice catch. Please put a comma before 'and T' or even better use 'while' or the like in the following sentence, because it is very confusing.

Many people call out the type Constrain<U,? super T> and say it should be invalid because Constrain requires its second type argument to be a subtype of its first and T, and every supertype of T, is not a subtype of U.

 

Yuck, that sentence is messy! Thanks for letting me know. I fixed it, or at least I fixed it a bit.

 

Similar to an earlier comment, I noticed the statement "whenever get is called on a List<String>, the runtime checks if the returned value is actually a String" seems to be incorrect. The compiler and runtime will happily let you do something like this:

Integer myInt = (Integer)(Object)strs.get(0);

So the runtime did not enforce that the object returned from "get" was actually a string. The ClassCastException only occurs if you try to assign the value to a String.

 

Right. I simplified things for pedagogical purposes. The compiler inserts casts in order to make the resulting bytecode well typed. Since typically the result of strs.get would be used as a String, typically the cast is inserted. But examples such as yours show that this is not always done.

 

Minor nitpick: "Similarly, whenever get is called on a List<String>, the runtime checks if the returned value is actually a String in case a raw type was misused, throwing a ClassCastException otherwise." Yes, in the code given there will be a ClassCastException, but not because the runtime system checks the return value of the get(), but rather because the runtime system checks the value of the assignment against the type of the variable to which it is being assigned. Because of type erasure, the runtime system doesn't know that the list is only supposed to hold strings, but it does know that the variable "one" can only hold a string and sees that the value being assigned is not one.

 

Sorry, I was being imprecise. What happens is actually somewhere in between. The bytecode has to be well-typed; you can't assign an object to a String variable (i.e. slot on the stack). So the runtime will never even execute the code if such an ill-typed assignment is in there. Consequently, the compiler inserts casts into the bytecode to make it well-typed. The runtime doesn't know why those casts are there, just like it doesn't know the list is supposed to be a List<String>, it just executes them because it was told to. This seemed like too much detail to go into, so I just sloppily called everything the runtime, heheh.
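This distinction can be made concrete with the earlier commenter's trick (a small sketch, names mine): routing the result through Object suppresses the compiler-inserted String cast, while a direct assignment to a String variable triggers it.

```java
import java.util.Arrays;
import java.util.List;

class CastSites {
    @SuppressWarnings({"rawtypes", "unchecked"})
    static boolean demo() {
        List raw = Arrays.asList(1);
        List<String> strs = raw;                     // unchecked assignment
        Integer i = (Integer) (Object) strs.get(0);  // no (String) checkcast: value is consumed as Object
        try {
            String s = strs.get(0);                  // compiler-inserted (String) checkcast fires
            return false;
        } catch (ClassCastException e) {
            return i == 1;                           // the first read succeeded, the second threw
        }
    }
}
```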

 

First of all, I certainly appreciate your work. It's never safe to assume that something works without having some proof to back that up.

Regarding the correctness of the compiler on the first example, I will point you to the Eclipse bug report I filed where Stephan Herrmann explains it why this should not compile according to the JLS: bugs.eclipse.org/bugs/show_bug.cgi... It seems like there are multiple versions of the JLS in use by the different compilers, and that is the one of the reasons for the discrepancies.

I like the last example with the use of an Interface to hide the broken implementation, very clever. Just goes to show that you need to verify that your 3rd party libraries don't have bugs.

 

I'm glad you liked it! And note that even a proof doesn't ensure you're safe. There are actually multiple proofs that Java's type system is sound. The problem is they don't handle all of Java (can you imagine how many years that would take to do?), and for historical reasons they happen to not handle null. Consequently, all the proofs were correct, but they failed to model Java correctly and missed this bug. Validation vs. Verification, if you know the software-engineering terms.

As for the bug, you made me realize I wrote this version of the paper a little incorrectly. In the original paper, I made sure to emphasize that the type-checker was not to blame. The reason is that, even though the type-checker is rejecting some valid Java code, that's necessarily going to happen. Type-checking Java is undecidable, and so it's impossible for a compiler to be always right. Our examples happen to fit into the space where some compilers succeed and some compilers fail. So in fact the program should type check, but it seems harsh to blame a compiler for occasionally failing to do the impossible, especially when the JLS's suggested algorithm fails to do so as you have found!

 

As mentioned by many already, this is a great post and really helps clarify the issue both technically and practically. I actually tried the code you posted and found that Java7 (1.7.0_67) compiled the program and ran to the expected runtime exception. When I tried Java8 (1.8.0_77), however, the program failed to compile and flagged the "return bind.upcast(constrain, t);" line with this error:
The method upcast(Unsound.Constrain, B) in the type Unsound.Bind is not applicable for the arguments
(Unsound.Constrain, T)

Does that mean that Java8 is more sound, or that the compiler is broken?

 

The compiler is broken. Also note that the paper has other examples that you should try. So far we've been able to get every compiler to (correctly) compile unsound code. Even the future Java compiler!

 

Amazing article. I wish every research paper was accompanied by something similar.

 

Does C# suffer these kinds of security-related issues? Does it have its own kinds of issues? If so, can you elaborate on them?

 

I don't think it does. The subtitle of the paper is "The Existential Crisis of Null Pointers" because there seem to be two key requirements for a language to exhibit this problem:

  1. Something that represents existential quantification strong enough that it can introduce new evidence. So for Java, the type Constrain<U,? super T> is essentially the type "there exists some type X that is a supertype of T and a subtype of U", which in turn implies T is a subtype of U if it's inhabitable.

  2. Implicit null pointers, meaning a null value that inhabits most types, or at least the ones used to form the existential quantification.

C# satisfies #2 but not #1. It doesn't really have any form of existential quantification, and certainly not anything strong enough to introduce new evidence. Thus I don't think there's a way to use null to trick the type-checker into thinking two unrelated types are related.

 
 
