The Checker Framework is an innovative programming tool that helps you prevent bugs at development time, before they escape to production.
Java's type system prevents some bugs, such as
int count = "hello";. However, it does not prevent other bugs, such as null
pointer dereferences, concurrency errors, disclosure of private
information, incorrect internationalization, out-of-bounds indices, and
so forth. Pluggable type-checking replaces a
programming language's built-in type system with a more powerful one.
We have created around 20 new type systems, and other people have created many more. The more powerful type system is not just a bug-finding tool: it is a verification tool that gives a guarantee that no errors (of certain types) exist in your program. Even though it is powerful, it is easy to use. It follows the standard typing rules that programmers already know, and it fits into their workflow.
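As a concrete illustration, consider nullness: plain javac accepts the following code, which crashes at run time. This sketch runs without the Checker Framework; the @Nullable annotation, shown in a comment so the example is self-contained, is the one the Nullness Checker provides in its org.checkerframework.checker.nullness.qual package.

```java
public class NullnessDemo {
    // Under the Nullness Checker, this parameter would be annotated
    // @Nullable, and the unguarded dereference below would be rejected
    // at compile time.
    static int length(/*@Nullable*/ String s) {
        return s.length();  // plain javac accepts this; it can throw NPE
    }

    static String demo() {
        try {
            length(null);
            return "no exception";
        } catch (NullPointerException e) {
            return "NPE";
        }
    }

    public static void main(String[] args) {
        System.out.println(demo());  // NPE
    }
}
```

A pluggable type-checker moves this failure from run time to compile time, which is the whole point of the framework.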
The Checker Framework is popular: it is used daily at Google, Amazon, Uber, on Wall Street, and in other companies from big to small. It is attractive to programmers who care about their craft and the quality of their code. The Checker Framework is the motivation for Java's type annotations feature. It has received multiple awards. With this widespread use, there is a need for people to help with the project: everything from bug fixes, to new features, to case studies, to integration with other tools. We welcome your contribution!
Why should you join this project? It's popular, so you will have an impact. It makes code more robust and secure, which is a socially important purpose. Past GSOC students have had great success. (David Lazar became a graduate student at MIT; multiple students have published papers in scientific conferences.) You will get to scratch your own itch by creating tools that solve problems that frustrate you. And, we have a lot of fun on this project!
Prerequisites: You should be very comfortable with the Java programming language and its type system. You should know how a type system helps you and where it can hinder you. You should be willing to dive into and understand a moderately-sized codebase. You should understand fundamental object-oriented programming concepts, such as behavioral subtyping: subtyping theory permits argument types to change contravariantly (even though Java forbids it for reasons related to overloading), whereas return types may change covariantly both in theory and in Java.
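The variance rules mentioned above can be seen in a minimal sketch (the class names are invented for illustration):

```java
class Animal { }
class Dog extends Animal { }

class Shelter {
    Animal adopt() { return new Animal(); }
}

class DogShelter extends Shelter {
    // Covariant return type: Java permits an override to narrow the
    // return type from Animal to Dog.
    @Override
    Dog adopt() { return new Dog(); }

    // A contravariant parameter change (e.g., broadening a String
    // parameter to Object) is sound in subtyping theory, but Java would
    // treat it as a new overload rather than an override.
}
```

Understanding why the covariant override is safe for all callers of Shelter.adopt() is exactly the kind of reasoning this project requires.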
Potential projects: Most of this document lists potential summer projects. The projects are grouped roughly from easiest to most challenging. Many of the projects are applicable beyond Google Summer of Code.
To get started, first do a case study of using the Checker Framework. Do this before submitting your proposal.
(If the type-checker issues no warnings and you did not write any
@SuppressWarnings annotations, then the annotations are correct, your program is correct, and you don't need feedback. Congratulations! You can try a more significant case study.)
Share the case study as soon as you finish it or as soon as you have a question that is not answered in the manual; don't wait until you submit your proposal. The subject line should be descriptive (not just "Case study", but "Nullness case study of Apache Commons Exec library"). You should give us access to your annotated code.
Once you have done this work on a small program such as from your coursework, you can repeat the process with an open-source program or library.
The primary result of your case study is that you will discover bugs in the subject program, or you will verify that it has no bugs (of some particular type). If you found bugs in open-source code, report them to the program's maintainer, and let us know when they are resolved. If you verified open-source code to be correct, that is great too; let us know and point us at the fully-annotated, verified program.
Another outcome of your case study is that you may discover bugs, limitations, or usability problems in the Checker Framework. Please report them. We'll try to fix them, or they might give you inspiration for improvements you would like to make to the Checker Framework this summer. You can also try to fix them yourself and submit a pull request, but that is not a requirement. You may discuss your ideas with us by sending mail to firstname.lastname@example.org.
Note that we do not recommend that you run many different checkers on small, artificial programs. Instead, run one checker on a more substantial program.
Why should you start with a case study, instead of diving right into fixing bugs, designing a new type system, or making other changes to the Checker Framework? Before you can contribute to any project, you must understand the tool from a user point of view, including its strengths, weaknesses, and how to use it. Therefore, you need to complete a substantive case study first.
We are very happy to answer your questions, and we are eager to interact with you. Before you ask a question, read these “getting started” instructions (that is, this file) and search in the Checker Framework manual for the answer. Don't send us a message that says nothing but “please guide me” or “tell me how to fix this bug”. Such a message shows that you haven't thought about the problem and haven't tried to solve it yourself. It also shows that you have not read this document, and we don't want to work with people who cannot read instructions!
Your questions should show that you will be a productive colleague over the summer: tell us what you have tried, tell us what went wrong or where you got stuck, and ask a concrete technical question that will help you get past your problem. If you can do that, then definitely ask your question, because we don't want you to be stuck or frustrated.
Whenever you send email (related to GSoC or not), please use standard email etiquette, such as: avoid all-caps; use a descriptive subject line; don't put multiple different topics in a single email message; start a new thread with a new subject line when you change the topic; don't clutter discussions with irrelevant remarks; don't use screenshots (unless there is a problem with a GUI), but instead cut-and-paste the code into your message; if you are making a guess, clearly indicate that it is a guess and your grounds for it. If you violate these basic rules, you will look unprofessional, and we don't want you to give a bad impression. Bug reports should be complete and should usually be reported to the issue tracker.
Some GSOC projects have a requirement to fix an issue in the issue tracker. We do not, because it is unproductive. Don't try to start fixing issues before you understand the Checker Framework from the user point of view, which will not happen until you have completed a case study on an open-source program.
To apply, you will submit a single PDF through the Google Summer of Code website. This PDF should contain two main parts. We suggest that you number the parts and subparts to ensure that you don't forget anything, and that we don't overlook anything in your application. You might find it easiest to create multiple PDFs for the different parts, then concatenate them before uploading to the website, but how you create your proposal is entirely up to you.
The proposal should have a descriptive title, both in the PDF and in the GSoC submission system. Don't use a title like "Checker Proposal" or "Proposal for GSoC". Don't distract from content with gratuitous graphics.
If you want to create a new type system (whether one proposed on this webpage or one of your own devising), then your proposal should be the type system's user manual. You don't have to integrate it in the Checker Framework repository (in other words, use any word processor or text editor you want to create a PDF file you will submit), but you should describe your proposed checker's parts in precise English or simple formalisms and you should follow the suggested structure.
List the tasks or subparts that are required to complete your project. This will help you discover a part that you had forgotten. We do not require a detailed timeline, because at this point, you don't know enough to create one.
Never literally cut-and-paste text that was not written by you, because that would be plagiarism. If you quote from text written by someone else, give proper credit.
If you want to do exactly what is already listed on this page, then just say that (but be specific about which one!), and it will not hurt your chances of being selected. However, you might have specific ideas about extensions, about details that are not mentioned on this webpage, about implementation strategies, and so forth. If you want to do a case study, say what program you will do your case study on. Don't submit a proposal that is just a rearrangement of ideas that already appear on this page or in the Checker Framework manual, because it does not help us to assess your likelihood of being successful.
Attach your annotated code as a .zip file or provide a GitHub URL.
The best way to impress us is by doing a thoughtful job in the case study. This may result in you submitting issues against the issue tracker of the program you are annotating or of the Checker Framework. Pull requests against our GitHub project are a plus but are not required: good submitted bugs are just as valuable as bug fixes! You can also make a good impression by correctly answering questions from other students on the GSOC mailing list.
Get feedback! Feel free to ask questions to make your application more competitive. We want you to succeed. Historically, students who start early and get feedback are most successful. You can submit a draft proposal via the Google Summer of Code website, and we will review it. We do not receive any notification when you submit a draft proposal, so if you want feedback, please tell us that. Also, we can only see draft proposals; we cannot see final proposals until after the application deadline has passed.
These projects take an existing type-checker, apply it to a codebase (you can choose your favorite one, or you can ask for suggestions), and determine whether the type system is easy to use and whether it is effective in revealing or preventing defects. Case studies are our most important source of new ideas and improvements: our most useful features have arisen as a result of an observation made during a case study. Many people have started out “just” doing a case study but have ended up making deep, fundamental contributions and even publishing scientific papers about their discoveries.
You should do a small case study during the application process (or maybe a large one, depending on your ambition). A case study is the best way to learn about the Checker Framework, determine whether you would enjoy joining the project during the summer, and show your aptitude so that you will be chosen for the summer.
A set of large case studies is one possible summer task. The most common choice is case studies of a recently-written type system, to determine its usability. Another choice is to annotate popular libraries for an existing type system, to make it more usable.
Here are a few suggestions, but a case study of any type system distributed with the Checker Framework is of value.
When type-checking a method call, the Checker Framework uses the method declaration's annotations. This means that in order to type-check code that uses a library, the Checker Framework needs an annotated version of the library.
The Checker Framework comes with a few annotated libraries. Increasing this number will make the Checker Framework even more useful.
For this project, choose a popular Java library that is not already annotated. (Or, choose a library that is already annotated for some type system, and annotate it for an additional type system. One advantage of this is that the library's build system is already set up to run the Checker Framework. You can tell which type systems a library is annotated for by examining its source code.) There are some specific suggestions below.
Fork the library's source code, adjust its build system to run the Checker Framework, and add annotations to it until the type-checker issues no warnings.
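As a sketch of what the result looks like, here is a hypothetical library class annotated for the Nullness Checker. The Cache class and its methods are invented for illustration, and the annotation is shown in a comment so the sketch compiles without the checker-qual jar; in a real annotated library it would be written out.

```java
public class Cache {
    private final java.util.Map<String, String> map = new java.util.HashMap<>();

    public void put(String key, String value) {
        map.put(key, value);
    }

    // The return type documents that lookup may fail; under the Nullness
    // Checker this would be written
    // @org.checkerframework.checker.nullness.qual.Nullable String.
    public /*@Nullable*/ String lookup(String key) {
        return map.get(key);  // Map.get returns null for a missing key
    }
}
```

Callers checked against this annotated signature are forced to handle the missing-key case, which is exactly the contract the annotation records.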
Before you get started, be sure to read How to get started annotating legacy code. More generally, read the relevant sections of the Checker Framework manual.
Show that the ASM library, or the BCEL library, properly handles signature strings (or find bugs in them).
To get started:
git checkout typecheck-signature
Some challenging aspects of this case study are:
someString.replace('.', '/'), which converts from
@FieldDescriptor. It also converts from
@BinaryName, but only for non-anonymous classes. The full rules for that, and for other calls such as
someString.replace('/', '.'), need to be worked out and implemented.
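For background, the string representations involved are facts of the JVM specification; the qualifier names mentioned above (@BinaryName, @FieldDescriptor, and so on) are the Signature Checker's names for them. A small sketch of the conversions:

```java
public class SignatureDemo {
    // Binary name ("java.util.Map$Entry") -> internal form
    // ("java/util/Map$Entry"): the replace('.', '/') idiom in question.
    static String toInternalForm(String binaryName) {
        return binaryName.replace('.', '/');
    }

    // Internal form -> field descriptor ("Ljava/util/Map$Entry;").
    static String toFieldDescriptor(String internalForm) {
        return "L" + internalForm + ";";
    }

    public static void main(String[] args) {
        String internal = toInternalForm("java.util.Map$Entry");
        System.out.println(internal);                     // java/util/Map$Entry
        System.out.println(toFieldDescriptor(internal));  // Ljava/util/Map$Entry;
    }
}
```

The case study's job is to decide which of these representations each library variable holds, and to verify that every replace call converts between the right pair.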
An index-out-of-bounds error occurs when a programmer provides an illegal
index to an array or list, as in a[i] where
i is less than 0 or greater than or equal to the length of
a. In languages like C, this is disastrous: buffer
overflows lead to about 1/6 of all security vulnerabilities. In languages
like Java, the result is “merely” that the program crashes. In
both languages, it is desirable to prevent programmers from making this
error and to prevent users from suffering the bad consequences.
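A minimal example of the error class, with its fix:

```java
public class IndexDemo {
    // Off-by-one bug: the last valid index is a.length - 1, so this
    // always throws ArrayIndexOutOfBoundsException.
    static int lastBuggy(int[] a) {
        return a[a.length];
    }

    static int lastFixed(int[] a) {
        return a[a.length - 1];
    }
}
```

A static index analysis aims to reject lastBuggy at compile time while accepting lastFixed without complaint.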
We have recently created a static analysis tool that prevents index-out-of-bounds exceptions in Java programs. We have not released it because we don't yet know how effective this tool is. Does it scale up to big, interesting programs? Are there common, important code patterns that it fails to handle? Does it produce too many false positive warnings? Does it place too heavy a burden on the user, either in terms of annotations or in terms of complexity of the error messages? Worst of all, are there unknown unsoundnesses in the tool?
This project will be a substantial case study with the Index
Checker. The first goal is to identify its merits and limitations.
The second goal is to improve its precision enough to make it usable by
real-world programmers. A stretch goal is to extend it to handle
collections such as
Lists, where the
remove() method makes sound,
precise analysis very tricky.
Implement support for Android Studio support annotations,
and others. Then, do a case study to show the utility (or not) of this support.
The Signedness Checker ensures that you do not misuse unsigned values, such as by mixing signed and unsigned values in a computation or by performing a meaningless operation.
Perform a case study of the Signedness Checker, in order to detect errors or guarantee that code is correct.
You will need to find Java packages that use unsigned arithmetic, or that could use unsigned arithmetic but do not.
Here are some possibilities (or, search for code that uses
signedness-sensitive routines).
Your case studies will show the need for enhancements to the Signedness Checker. For example, it does not currently handle boxed integers and BigInteger; these haven't yet come up in case studies but could be worthwhile enhancements.
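Java has no unsigned primitive types, so unsigned arithmetic is simulated with masking and helper methods, and mixing the two interpretations is exactly the kind of error at issue. A small self-contained illustration:

```java
public class SignednessDemo {
    // Reinterpret a byte's bit pattern as an unsigned value in [0, 255].
    static int asUnsigned(byte b) {
        return b & 0xFF;
    }

    public static void main(String[] args) {
        byte raw = (byte) 200;
        System.out.println(raw);              // -56: signed interpretation
        System.out.println(asUnsigned(raw));  // 200: unsigned interpretation

        // Applying a signed comparison to unsigned data gives the wrong
        // answer; Integer.compareUnsigned gives the intended one.
        int big = 0x80000000;  // bit pattern for 2^31, a large unsigned value
        System.out.println(big > 1);                              // false
        System.out.println(Integer.compareUnsigned(big, 1) > 0);  // true
    }
}
```

The Signedness Checker's goal is to flag the signed comparison on unsigned data at compile time.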
Java 8 introduced the Optional
class, a container that is either empty or contains a non-null value.
It is intended to solve the problem of null
pointer exceptions. However,
Optional has its own problems. Because of
Optional's problems, many commentators advise programmers to use
Optional only in limited ways.
The goal of this project is to evaluate the Optional
Checker, which warns programmers who misuse Optional.
Another goal is to extend the Optional Checker to make it more precise or
to detect other misuses of Optional.
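For reference, the characteristic misuse is calling get() on a possibly-empty Optional:

```java
import java.util.NoSuchElementException;
import java.util.Optional;

public class OptionalDemo {
    // The misuse: get() without first establishing that a value is present.
    static String unwrapUnsafely(Optional<String> o) {
        return o.get();  // throws NoSuchElementException if o is empty
    }

    // A safe alternative.
    static String unwrapSafely(Optional<String> o) {
        return o.orElse("default");
    }

    public static void main(String[] args) {
        System.out.println(unwrapSafely(Optional.empty()));  // default
        try {
            unwrapUnsafely(Optional.empty());
        } catch (NoSuchElementException e) {
            System.out.println("crashed at run time");
        }
    }
}
```

Trading a NullPointerException for a NoSuchElementException is no improvement; a checker that flags unguarded get() calls addresses the real problem.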
Annotate the BCEL library to express its contracts with respect to nullness. Show that the BCEL library has no null pointer exceptions (or find bugs in BCEL).
To get started:
git checkout typecheck-nullness
Some challenging aspects of this case study are:
copy() method. Some implementations of
copy() return null, but are not documented to do so. In addition, some implementations of
copy() catch and ignore exceptions. I think it would be nicest to change the methods to never return null, but to throw an exception instead. (This is no more burdensome to users, who currently have to check for null.) Alternately, the methods could all be documented to return null.
This project is related to the Bazel build system, and was proposed by its development manager.
The Bazel codebase contains 1586 occurrences of the @Nullable
annotation. This annotation indicates that a variable may hold a null
value. This is valuable documentation and helps programmers avoid null
pointer exceptions that would crash Bazel. However, these annotations are
not checked by any tool. Instead, programmers have to do their best to
obey the @Nullable specifications in the source code. This is
a lost opportunity, since documentation is most useful when it is
automatically processed and verified. (For several years, Google tried
using FindBugs, but they
eventually abandoned it: its analysis is too weak, suffering too many
false positives and false negatives.)
Despite the programmers' best efforts, null pointer exceptions do still creep into the code, impacting users. The Bazel developers would like to prevent these. They want a guarantee, at compile time, that no null pointer exceptions will occur at run time.
Such a tool already exists: the Nullness
Checker of the Checker
Framework. It runs as a compiler plug-in, and it issues a warning at
every possible null pointer dereference. If it issues no warnings, the
code is guaranteed not to throw a
NullPointerException at run time.
The goal of this project is to do a large-scale case study of the Nullness
Checker on Bazel. The main goal is to understand how the Nullness Checker
can be used on a large-scale industrial codebase. How many lurking bugs
does it find? What
annotations are missing from the codebase because the developers failed to
write them? What are its limitations, such as code patterns that it cannot
recognize as safe? (You might create new analyses and incorporate them
into the Nullness Checker, or you might just report bugs to the Nullness
Checker developers for fixing.) What burdens does it place on users? Is
the cost-benefit tradeoff worth the effort — that is, should Google
adopt this tool more broadly? How should it be improved? Are the most
needed improvements in the precision of the analysis, or in the UI of the tool?
Guava is already partially annotated with nullness annotations — in part by Guava's developers, and in part by the Checker Framework team. However, Guava does not yet type-check without errors. Doing so could find more errors (the Checker Framework has found nullness and indexing errors in Guava in the past) and would be a good case study to learn the limitations of the Nullness Checker.
By default, the Checker Framework is unsound in several circumstances. “Unsound” means that the Checker Framework may report no warning even though the program can misbehave at run time.
The reason that the Checker Framework is unsound is that we believe that enabling these checks would cause too many false positive warnings: warnings that the Checker Framework issues because it cannot prove that the code is safe (even though a human can see that the code is safe). Having too many false positive warnings would irritate users and lead them not to use the checker at all, or would force them to simply disable those checks.
We would like to do studies of these command-line options to see whether our guess is right. Is it prohibitive to enable sound checking? Or can we think of enhancements that would let us turn on those checks that are currently disabled by default?
Many other tools exist for prevention of programming errors, such as Error Prone, NullAway, FindBugs, JLint, PMD, and IDEs such as Eclipse and IntelliJ. These tools are not as powerful as the Checker Framework (some are bug finders rather than verification tools, and some perform a shallower analysis), but they may be easier to use. Programmers who use these tools wonder, "Is it worth my time to switch to using the Checker Framework?"
The goal of this project is to perform a head-to-head comparison of as many different tools as possible. You will quantify:
This project will help programmers to choose among the different tools — it will show when a programmer should or should not use the Checker Framework. This project will also indicate how each tool should be improved.
One place to start would be with an old version of a program that is known to contain bugs. Or, start with the latest version of the program and re-introduce fixed bugs. (Either of these is more realistic than introducing artificial bugs into the program.) A possibility would be to use the Lookup program that has been used in previous case studies.
The Checker Framework is shipped with about 20 type-checkers. Users can create a new checker of their own. However, some users don't want to go to that trouble. They would like to have more type-checkers packaged with the Checker Framework for easy use.
Each of these projects requires you to design a new type system, implement it, and perform case studies to demonstrate that it is both usable and effective in finding/preventing bugs.
Programs are easier to use and debug if their output is deterministic. For example, it is easier to test a deterministic program, because nondeterminism can lead to flaky tests that sometimes succeed and sometimes fail. As another example, it is easier for a user or programmer to compare two deterministic executions than two nondeterministic executions.
A number of Java methods return nondeterministic results, making any program that uses them potentially nondeterministic. Here are a few examples:
You can find more examples of non-deterministic specifications, and suggestions for how to avoid them, in the Randoop manual and in the ICST 2016 paper Detecting assumptions on deterministic implementations of non-deterministic specifications by A. Shi, A. Gyori, O. Legunsen, and D. Marinov, which presents the NonDex tool.
The NonDex tool works dynamically, which means that it cannot detect all user-visible nondeterminism nor give a guarantee of correctness — a guarantee that the program is deterministic from the user's point of view.
The goal of this project is to create a tool, based on a type system, that gives a guarantee. The tool would report to the user all possible nondeterminism in a program, so that the user can fix the program before it causes problems during testing or in the field.
More concretely, this problem can be handled by creating two simple type systems that indicate whether a given value is deterministic. In each diagram, the supertype appears above the subtype.
@PossiblyNonDeterministic          @PossiblyNonDeterministicOrder
            |                                    |
     @Deterministic                     @DeterministicOrder
The programmer would annotate routines that are expected to take deterministic inputs. (An example could be all printing routines.) Then, the type system would issue a warning whenever one of those routines is called on a possibly non-deterministic value.
The standard library would have annotations for its methods that behave nondeterministically, such as those listed above.
You can find a draft manual chapter that documents a possible design for a Determinism Checker. It differs slightly from the above proposal, for instance by having a single type hierarchy instead of two.
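The kind of nondeterminism at issue, and its standard fix, can be sketched in plain Java (independent of the proposed checker):

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.TreeSet;

public class DeterminismDemo {
    // Sorting into a TreeSet imposes a deterministic iteration order,
    // regardless of the order of the input collection.
    static List<String> deterministicOrder(Collection<String> c) {
        return new ArrayList<>(new TreeSet<>(c));
    }

    public static void main(String[] args) {
        // The default hashCode() is identity-based and varies between JVM
        // runs, so iterating this set may print in different orders.
        Set<Object> nondet = new HashSet<>();
        nondet.add(new Object());
        nondet.add(new Object());

        System.out.println(deterministicOrder(List.of("b", "a", "c")));
    }
}
```

A determinism type system would give the HashSet iteration a @PossiblyNonDeterministicOrder-style type and accept the sorted version where a deterministic value is required.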
The Checker Framework comes with a Tainting Checker that is so general that it is not good for much of anything. In order to be useful in a particular domain, a user must customize it:
rename the @Tainted and @Untainted qualifiers to something more specific for the domain.
The first part of this project is to make this customization easier to do — preferably, a user does not have to change any code in the Checker Framework, as is currently the case for the Subtyping Checker. As part of making customization easier, a user should be able to specify multiple levels of taint — many information classification hierarchies have more than two levels (for example, the US government separates classified information into three categories: Confidential, Secret, and Top Secret).
The second part of this project is to provide several examples, and do case studies showing the utility of compile-time taint checking.
Possible examples include:
@PrivacySink annotations used by the Facebook Infer static analyzer.
For some microbenchmarks, see the Juliet test suite for Java from CWE.
Windows cannot run command lines longer than 8191 characters. Creating a too-long command line causes failures when the program is run on Windows. These failures are irritating when discovered during testing, and embarrassing or worse when discovered during deployment. The same command line would work on Unix, which has longer command-line limits, and as a result developers may not realize that their change to a command can cause such a problem.
Programmers would like to enforce that they don't accidentally pass a
too-long string to the
exec() routine. The goal of this
project is to give a compile-time tool that provides such a guarantee.
Here are two possible solutions.
Simple solution: For each array and list, determine whether its length is known at compile time. The routines that build a command line are only allowed to take such constant-length lists, on the assumption that if the length is constant, its concatenation is probably short enough.
More complex solution:
For each String, have a compile-time estimate of its maximum length. Only
allow exec() to be called on strings whose estimate is no more than 8191.
String concatenation would return a string whose estimated size is the sum
of the maximums of its arguments, and likewise for concatenating an array
or list of strings.
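A run-time version of the guard can be sketched as follows; the proposed type system would move this check to compile time by tracking an upper bound on each string's length. The class and method names here are illustrative, not part of any existing API.

```java
public class CommandLineLimit {
    // The Windows command-line limit stated above.
    static final int WINDOWS_MAX = 8191;

    static boolean fitsWindowsLimit(String command) {
        return command.length() <= WINDOWS_MAX;
    }

    // A dynamic check before handing the command to Runtime.exec();
    // a static length analysis would make this test unnecessary.
    static void checkedExec(String command) {
        if (!fitsWindowsLimit(command)) {
            throw new IllegalArgumentException(
                "command line too long for Windows: " + command.length());
        }
        // Runtime.getRuntime().exec(command) would go here.
    }
}
```

The advantage of the static version is that the failure is reported to the developer at build time, rather than to a Windows user at run time.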
Overflow is when 32-bit arithmetic differs from ideal arithmetic. For
example, in Java the
int computation 2,147,483,647 + 1 yields
a negative number, -2,147,483,648. The goal of this project is to detect
and prevent problems such as these.
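The overflow above can be reproduced, and made explicit, with standard Java:

```java
public class OverflowDemo {
    public static void main(String[] args) {
        int max = Integer.MAX_VALUE;     // 2,147,483,647
        System.out.println(max + 1);     // -2147483648: silent wraparound

        // Math.addExact turns the silent wraparound into an explicit error.
        try {
            Math.addExact(max, 1);
        } catch (ArithmeticException e) {
            System.out.println("overflow detected");
        }
    }
}
```

A static overflow analysis aims to report the first addition at compile time, with no run-time cost.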
As a concrete application, the Index
Checker is currently unsound in the presence of integer overflow. If
i is known to be
@Positive, and 1 is
added to it, then the Index Checker believes that its type is still
@Positive. If i were Integer.MAX_VALUE, then this type would be false.
This project involves removing this unsoundness by implementing a type system to track when an integer value might overflow. Implementing such a type system would permit the Index Checker to extend its guarantees even to programs that might overflow. We have noticed that this is important for some indexing bugs in practice (using the Index Checker, we found 5 bugs in Google Guava related to overflow). A key challenge will be keeping the number of false positives in this new type system low: previous attempts to build static analyses for integer overflows have either been unsound or had high false positive rates. Because we are using this analysis for a specialized purpose (i.e., the types are important only when the Index Checker's analysis might become unsound), false positives may be less of a concern.
The Lock Checker prevents race conditions by ensuring that locks are held when they need to be. It does not prevent deadlocks that can result from locks being acquired in the wrong order. This project would extend the Lock Checker to address deadlocks, or create a new checker to do so.
Suppose that a program contains two different locks. Suppose that one thread tries to acquire lockA then lockB, and another thread tries to acquire lockB then lockA, and each thread acquires its first lock. Then both locks will wait forever for the other lock to become available. The program will not make any more progress and is said to be deadlocked.
If all threads acquire locks in the same order — in our example, say lockA then lockB — then deadlocks do not happen. You will extend the Lock Checker to verify this property.
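The discipline to be verified can be sketched as follows; every acquisition site in the program obeys the global order lockA-before-lockB, so the circular wait described above cannot arise. The class, locks, and balance field are invented for illustration.

```java
public class LockOrdering {
    private static final Object lockA = new Object();
    private static final Object lockB = new Object();
    private static int balance = 0;

    // Every thread acquires lockA before lockB: consistent with the
    // global lock order, so no deadlock is possible.
    static void transfer(int amount) {
        synchronized (lockA) {
            synchronized (lockB) {
                balance += amount;
            }
        }
    }

    static int getBalance() {
        synchronized (lockA) {
            synchronized (lockB) {
                return balance;
            }
        }
    }
}
```

An extended Lock Checker would verify that no method anywhere in the program acquires lockB while holding only lockB's successors in the order, i.e., that the A-then-B order is never reversed.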
The Index Checker is currently restricted to fixed-size data structures. A fixed-size data structure is one whose length cannot be changed once it is created; examples of fixed-size data structures are arrays and Strings. This limitation prevents the Index Checker from verifying indexing operations on mutable-size data structures, like Lists, that have add or remove methods. Since these kind of collections are common in practice, this is a severe limitation for the Index Checker.
The limitation is caused by the Index Checker's use of types that are dependent on the length of a data structure. If the
data structure's length could change,
then the correctness of such a type might change as well.
A naive solution would be to invalidate these types any time a method is called on the data structure.
Unfortunately, aliasing makes even this unsound. Moreover, a great solution to this problem would keep
the information in the type when a method like add or remove is called.
A more complete solution might involve some special annotations on List that permit the information to be persisted.
This project would involve designing and implementing a solution to this problem.
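A concrete instance of the invalidation problem, in plain Java; the stale index below is exactly the fact that a sound type system must stop believing after remove() is called:

```java
import java.util.ArrayList;
import java.util.List;

public class ListIndexDemo {
    static String staleIndex() {
        List<String> list = new ArrayList<>(List.of("a", "b", "c"));
        int i = 2;           // a valid index for the current list
        list.remove(0);      // the list now has size 2, so i is stale
        try {
            return list.get(i);
        } catch (IndexOutOfBoundsException e) {
            return "stale index";
        }
    }
}
```

A type of the form "i is a valid index for list" was true before the remove() call and false after it; tracking that transition soundly, in the presence of aliases to the same list, is the heart of this project.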
Verifying a program to be free of errors can be a daunting task. When starting out, a user may be more interested in bug-finding than verification. The goal of this project is to create a nullness bug detector that uses the powerful analysis of the Checker Framework and its Nullness Checker, but omits some of its more confusing or expensive features. The goal is to create a fast, easy-to-use bug detector. It would enable users to start small and advance to full verification in the future, rather than having to start out doing full verification.
This could be structured as a new NullnessLight Checker, or as a command-line argument to the current Nullness Checker. Here are some differences from the real Nullness checker:
Assume that, at each call to Map.get, the given key appears in the map.
Treat every method as if it were @Pure: it returns the same value on every call.
Each of these behaviors should be controlled by its own command-line argument, as well as being enabled in the NullnessLight Checker.
The implementation may be relatively straightforward, since in most cases the behavior is just to disable some functionality of existing checkers.
It will be interesting to compare this NullnessLight Checker to the regular Nullness Checker:
Uber's NullAway tool may be an implementation of this idea (that is, a fast, but incomplete and unsound, nullness checker). Does Uber's tool provide users a good introduction to the ideas that a user can use to transition to a nullness type system later?
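The Map.get behavior that separates the two designs can be seen in plain Java:

```java
import java.util.HashMap;
import java.util.Map;

public class MapGetDemo {
    public static void main(String[] args) {
        Map<String, Integer> m = new HashMap<>();
        m.put("count", 1);

        // The full Nullness Checker treats both results as possibly null
        // and demands a check; a lighter, unsound checker might assume the
        // key is present -- correct for the first call, wrong for the second.
        Integer present = m.get("count");   // 1
        Integer absent = m.get("missing");  // null: unboxing it would throw NPE
        System.out.println(present);
        System.out.println(absent);
    }
}
```

A NullnessLight-style checker would accept both calls silently, gaining ease of use at the cost of missing the second, genuinely dangerous one.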
This project is to improve support for typestate checking.
Ordinarily, a program variable has
the same type throughout its lifetime from when the variable is declared
until it goes out of scope. “Typestate”
permits the type of an object or variable to change in a controlled way.
Essentially, it is a combination of standard type systems with dataflow
analysis. For instance, a file object changes from unopened, to opened, to
closed; certain operations such as writing to the file are only permitted
when the file is in the opened typestate. Another way of saying this is
write is permitted after
open, but not after
close. Typestate is applicable to many other types of software properties as well.
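The open/closed example can be observed at run time with a standard Writer; a typestate checker would reject the final write at compile time instead. BufferedWriter is used here because (unlike StringWriter alone) it enforces the closed state dynamically.

```java
import java.io.BufferedWriter;
import java.io.IOException;
import java.io.StringWriter;

public class TypestateDemo {
    // Returns true if writing after close is rejected at run time.
    static boolean writeAfterCloseFails() {
        try {
            BufferedWriter w = new BufferedWriter(new StringWriter());
            w.write("ok");        // permitted while the writer is "open"
            w.close();            // transition to the "closed" state
            w.write("too late");  // the write-after-close error
            return false;
        } catch (IOException e) {
            return true;          // BufferedWriter reports "Stream closed"
        }
    }
}
```

A typestate system would give w the type "closed writer" after the close() call and report the subsequent write() as a type error.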
Two typestate checking frameworks exist for the Checker Framework. Neither is being maintained; a new one needs to be written.
We also welcome your ideas for new type systems. For example, any run-time failure can probably be prevented at compile time with the right analysis. Can you come up with a way to fix your pet peeve?
It is easiest, but not required, to choose an existing type system from the literature, since that means you can skip the design stage and go right to implementation.
This task can be simple or very challenging, depending on how ambitious the type system is. Remember to focus on what helps a software developer most!
A number of type annotations take, as an argument, a Java expression. The parser for these is a hack. The goal of this project is to replace it by calls to JavaParser. This should be straightforward, since JavaParser is already used in other parts of the Checker Framework.
The Annotation File Utilities, or AFU, insert annotations into, and extract
annotations from, .java files, .class files, and text files. These programs were written before the
ASM bytecode library supported Java 8's
type annotations. Therefore, the AFU has its own custom version of ASM
that supports type annotations. Now that ASM 5 has been released and it
supports type annotations, the AFU needs to be slightly changed to use
the official ASM 5 library instead of its own custom ASM variant.
This project is a good way to learn about
.class files and
Java bytecodes: how they are stored, and how to manipulate them.
Many program analyses are too verbose to expect a person to read their entire output. However, after a program change (whether a refactoring, a bug fix, an enhancement, or addition of new code), the difference is likely to be small between the analysis run on the old code and the analysis run on the new code. Showing this to a developer may be useful, and in particular can help the programmer to better understand the changes he or she has made.
The analysis diff tool would take as input two analysis results (the previous and the current one). It would output only the new parts of its second input. (Or, it could output a complete diff between two analysis results.)
A concrete example of an analysis diff tool is checklink-persistent-errors; see the documentation at the top of the file. That tool only works for one particular analysis, the W3C Link Checker. Something of this sort also appears to be built into FindBugs.
Analysis diff would be useful in other contexts. One example is bug-detection tools, such as FindBugs or the Checker Framework, whose output can be extremely verbose when first run on a program. Another example is inference tools; for example, DynComp is an inference tool whose output could become manageable to users if it were shown to them in small doses. You can probably think of other examples.
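A minimal sketch of the analysis diff idea, assuming warnings are compared verbatim (a real tool would also need to match warnings whose line numbers shifted after an edit):

```java
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

public class AnalysisDiff {
  /** Returns the warnings in {@code current} that do not appear in {@code previous}. */
  static List<String> newWarnings(List<String> previous, List<String> current) {
    Set<String> old = new LinkedHashSet<>(previous);
    return current.stream().filter(w -> !old.contains(w)).toList();
  }

  public static void main(String[] args) {
    List<String> before = List.of(
        "Foo.java:10: warning: [nullness] dereference of possibly-null x",
        "Foo.java:25: warning: [index] array access out of bounds");
    List<String> after = List.of(
        "Foo.java:10: warning: [nullness] dereference of possibly-null x",
        "Bar.java:7: warning: [nullness] incompatible types in assignment");
    // Only the Bar.java warning is new; the unchanged Foo.java:10 warning
    // and the now-fixed Foo.java:25 warning are suppressed.
    newWarnings(before, after).forEach(System.out::println);
  }
}
```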
A type system is useful because it prevents certain errors. The downside of a type system is the effort required to write the types. Type inference is the process of automatically determining those types, relieving the programmer of the burden of writing them.
Type-checking is a modular, or local, analysis. For example, given a procedure in which types have been written, a type-checker can verify the procedure's types without examining the implementation of any other procedure.
By contrast, type inference is a non-local, whole-program analysis. For example, to determine what type should be written for a procedure's formal parameter, it is necessary to examine the type of the argument at every call to that procedure. At every call, to determine the type of some argument A, it may be necessary to know the types of the formal parameters to the procedure that contains A, and so forth. It is possible to resolve this seemingly-infinite regress, but only by examining the entire program in the worst case.
The differences between type checking and type inference mean that they are usually written in very different ways. Type inference is usually done by first collecting all of the constraints for the entire program, then passing them to a specialized solver. Writing a type inference tool is harder. Worst of all, it's annoying to encode all the type rules twice in different ways: once for the type checker and once for the type inference tool.
As a result, many type systems have a type checker but no type inference tool. This makes programmers reluctant to use these type systems, which denies programmers the benefits of type-checking.
The goal of this project is to automatically create type inference tools from type-checking tools, so that it is not necessary for the type system designer to implement the type system twice in different ways.
A key insight is that the type-checker already encodes all knowledge about what is a legal, well-typed program. How can we exploit that for the purpose of type inference as well as type-checking? The idea is to iteratively run the type-checker, multiple times, observing what types are passed around the program and what errors occur. Each iteration collects more information, until there is nothing more to learn.
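The iterative idea can be illustrated with a toy analysis. This sketch (not the actual prototype) infers a nullness qualifier for each variable by repeatedly propagating facts along assignments until nothing changes, mimicking "run the checker, observe, repeat"; the lattice is just NONNULL below NULLABLE.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class IterativeInference {
  record Assign(String lhs, String rhs) {}  // represents "lhs = rhs;"

  /** Starts every variable at NONNULL (the optimistic choice) and iterates to a
   *  fixpoint: if the right-hand side may be null, so may the left-hand side. */
  static Map<String, String> infer(List<String> nullableSeeds, List<Assign> assigns) {
    Map<String, String> type = new HashMap<>();
    for (Assign a : assigns) { type.put(a.lhs(), "NONNULL"); type.put(a.rhs(), "NONNULL"); }
    for (String v : nullableSeeds) type.put(v, "NULLABLE");
    boolean changed = true;
    while (changed) {          // each pass corresponds to one "run of the checker"
      changed = false;
      for (Assign a : assigns) {
        if (type.get(a.rhs()).equals("NULLABLE") && type.get(a.lhs()).equals("NONNULL")) {
          type.put(a.lhs(), "NULLABLE");  // weaken the estimate; we learned something
          changed = true;
        }
      }
    }
    return type;               // nothing more to learn
  }

  public static void main(String[] args) {
    // x = null; y = x; z = w;  =>  x and y are NULLABLE; z and w stay NONNULL
    Map<String, String> t = infer(List.of("x"),
        List.of(new Assign("y", "x"), new Assign("z", "w")));
    System.out.println(t);
  }
}
```

A real implementation iterates the actual type-checker over the whole program, but the fixpoint structure is the same.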
This approach has some disadvantages: it is theoretically slower, and theoretically less accurate, than a purpose-built type inference tool for each type system. However, it has the major advantage that it requires no extra work to implement a type inference tool. Furthermore, maybe it works well enough in practice.
A prototype implementation of this idea already exists, but it needs to be evaluated in order to discover its flaws, improve its design, and discover how accurate it is in practice.
The Checker Framework's dataflow framework (see its manual) implements flow-sensitive type refinement (local type inference) and other features. It is used in the Checker Framework and also in Error Prone, NullAway, and elsewhere.
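What flow-sensitive type refinement buys you, in miniature: after the null test, the dataflow analysis refines x from nullable to non-null within the "then" branch, so the dereference needs no cast or warning suppression. (The annotation appears only in a comment; this compiles with plain javac.)

```java
public class RefinementDemo {
  static int length(/*@Nullable*/ String x) {
    if (x != null) {
      // Here the dataflow framework knows x is non-null,
      // so this dereference is verified safe.
      return x.length();
    }
    return 0;  // x may be null on this path
  }

  public static void main(String[] args) {
    System.out.println(length("abc") + length(null)); // 3
  }
}
```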
There are a number of open issues — both bugs and feature requests — related to the dataflow framework. The goal of this project is to address as many of those issues as possible, which will directly improve all the tools that use it.
A program analysis technique makes estimates about the current values of expressions. When a method call occurs, the analysis has to throw away most of its estimates, because the method call might change any variable. If the method is known to have no side effects, then the analysis doesn't need to throw away its estimates, and the analysis is more precise.
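Why side-effect information matters, in miniature: after the call to an unknown method, a sound checker must discard its knowledge that `box.value` is non-null, because the call might have set it to null. If `mystery` were annotated (and verified) side-effect-free, the refinement would survive the call. The method names here are illustrative, not from any real API.

```java
public class PurityDemo {
  static class Box { String value; }

  static void mystery(Box b) { /* might do b.value = null; the checker can't tell */ }

  static int useBox(Box box) {
    if (box.value != null) {
      mystery(box);
      // A sound checker re-tests here: the earlier non-null fact is gone.
      return box.value == null ? 0 : box.value.length();
    }
    return -1;
  }

  public static void main(String[] args) {
    Box b = new Box();
    b.value = "abc";
    System.out.println(useBox(b)); // 3, but a checker cannot assume mystery() is harmless
  }
}
```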
For example, the Checker Framework unsoundly trusts purity annotations but does not check them. This makes the system vulnerable to programmer mistakes when writing annotations. The Checker Framework contains a sound checker for immutability annotations, but it suffers from too many false positive warnings and thus is not usable. A better checker is necessary. It will also incorporate aspects of an escape analysis.
Choosing an algorithm from the literature is the best choice, but there still might be research work to do: in the past, when implementing algorithms from research papers, we have sometimes found that they did not work as well as claimed, and we have had to enhance them. One challenge is that any technique used by pluggable type-checking to verify immutability must be modular, but many side effect analyses require examining the whole program. The system should require few or no method annotations within method bodies. I'm not sure whether such a system already exists or we need to design a new one.
Currently, type annotations are only displayed in Javadoc if they are explicitly written by the programmer. However, the Checker Framework provides flexible defaulting mechanisms, reducing the annotation overhead. This project will integrate the Checker Framework defaulting phase with Javadoc, showing the signatures after applying defaulting rules.
The Checker Framework runs much slower than the standard javac compiler — often 20 times slower! This is not acceptable as part of a developer's regular process, so we need to speed up the Checker Framework. This project involves determining the cause of slowness in the Checker Framework, and correcting those problems.
This is a good way to learn about performance tuning for Java applications.
Some concrete tasks include:
Intern commonly-compared data structures, such as Elements. Interning could save time when doing comparisons. You can verify the correctness of the optimization by running the Interning Checker on the Checker Framework code. Compare the run time of the Checker Framework before and after this optimization.
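The interning optimization can be sketched generically: canonicalize structurally-equal objects in a pool so that later comparisons can use fast reference equality (==) instead of a deep equals(). The same idea could apply to the Checker Framework's heavily-compared data structures.

```java
import java.util.HashMap;
import java.util.Map;

public class InternPool<T> {
  private final Map<T, T> pool = new HashMap<>();

  /** Returns the canonical representative of {@code value}. */
  public T intern(T value) {
    T canonical = pool.putIfAbsent(value, value);
    return canonical == null ? value : canonical;
  }

  public static void main(String[] args) {
    InternPool<String> p = new InternPool<>();
    String a = p.intern(new String("type"));
    String b = p.intern(new String("type"));
    // Deep equality holds either way; after interning, == also holds,
    // which is what makes repeated comparisons cheap.
    System.out.println(a == b); // true
  }
}
```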
Implement run-time checking to complement compile-time checking. This will let users combine the power of static checking with that of dynamic checking.
Every type system is too strict: it rejects some programs that never go wrong at run time. A human must insert a type loophole to make such a program type-check. For example, Java takes this approach with its cast operation (and in some other places).
When doing type-checking, it is desirable to automatically insert run-time checks at each operation that the static checker was unable to verify. (Again, Java takes exactly this approach.) This guards against mistakes by the human who inserted the type loopholes. A nice property of this approach is that it enables you to prevent errors in a program with no type annotations: whenever the static checker is unable to verify an operation, it would insert a dynamic check. Run-time checking would also be useful in verifying whether the suppressed warnings are correct — whether the programmer made a mistake when writing them.
The annotation processor (the pluggable type-checker) should automatically insert the checks, as part of the compilation process.
There should be various modes for the run-time checks.
The run-time penalty should be small: a run-time check is necessary only at the location of each cast or suppressed warning. Everywhere that the compile-time checker reports no possible error, there is no need to insert a check. But, it will be an interesting project to determine how to minimize the run-time cost.
Another interesting, and more challenging, design question is whether you need to add and maintain a run-time representation of the property being tested. It's easy to test whether a particular value is null, but how do you test whether it is tainted, or should be treated as immutable? For a more concrete example, see the discussion of the (not yet implemented) [Javari run-time checker](http://pag.csail.mit.edu/pubs/ref-immutability-oopsla2005-abstract.html). Adding this run-time support would be an interesting and challenging project.
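For the easy case (a property testable on a single value), this is what an inserted check could look like: instead of trusting a cast that the static checker could not verify, the compiler substitutes a guarded cast that fails fast at run time. `checkedCast` is a hypothetical helper for illustration, not part of the Checker Framework.

```java
public class RuntimeCheckDemo {
  /** The dynamic check the annotation processor would insert in place of
   *  an unverified cast or a suppressed-warning site. */
  static <T> T checkedCast(Object value, Class<T> type) {
    if (value != null && !type.isInstance(value)) {
      throw new ClassCastException(
          "run-time check failed: " + value.getClass() + " is not a " + type);
    }
    return type.cast(value);
  }

  public static void main(String[] args) {
    Object o = "hello";
    // Where the static checker reported a possible error, the inserted
    // check verifies the property dynamically:
    String s = checkedCast(o, String.class);
    System.out.println(s.length()); // 5
  }
}
```

Properties such as taintedness or immutability have no such run-time test built into the JVM, which is exactly the run-time-representation problem discussed above.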
We developed a prototype for the EnerJ runtime system. That code could be used as a starting point, or you could start afresh.
In the short term, this could be prototyped as a source- or bytecode-rewriting approach; but integrating it into the type checker is a better long-term implementation strategy.
The Checker Framework comes with support for external tools, including both IDEs (such as an Eclipse plug-in) and build tools (instructions for Maven, etc.).
These plug-ins and other integration should be improved. We have a number of concrete ideas, but you will also probably come up with some after a few minutes of using the existing IDE plugins!
This is only a task for someone who is already an expert, such as someone who has built IDE plugins before or is very familiar with the build system. One reason is that these tools tend to be complex, which can lead to subtle problems. Another reason is that we don't want to be stuck maintaining code written by someone who is just learning how to write an IDE plugin.
Rather than modifying the Checker Framework's existing support or building new support from scratch, it may be better to adapt some other project's support for build systems and IDEs. For instance, you might make coala support the Checker Framework, or you might adapt the tool integration provided by Error Prone.
Design and implement an algorithm to check type soundness of a type system by exhaustively verifying the type checker on all programs up to a certain size. The challenge lies in efficient enumeration of all programs and avoiding redundant checks, and in knowing the expected outcome of the tests. This approach is related to bounded exhaustive testing and model checking; for a reference, see [Efficient Software Model Checking of Soundness of Type Systems](http://www.eecs.umich.edu/~bchandra/publications/oopsla08.pdf).
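A toy version of the idea, under heavy simplification: enumerate every expression up to a small size in a two-type language (INT and BOOL), run a type checker on each, and confirm that every expression the checker accepts also evaluates without a dynamic type error, i.e., soundness on the bounded domain. The language and checker here are invented for illustration.

```java
import java.util.ArrayList;
import java.util.List;

public class BoundedSoundness {
  sealed interface Expr permits Lit, Not, Add {}
  record Lit(Object v) implements Expr {}
  record Not(Expr e) implements Expr {}
  record Add(Expr l, Expr r) implements Expr {}

  /** Type checker: returns "INT", "BOOL", or null for ill-typed. */
  static String typeOf(Expr e) {
    if (e instanceof Lit l) return l.v() instanceof Integer ? "INT" : "BOOL";
    if (e instanceof Not n) return "BOOL".equals(typeOf(n.e())) ? "BOOL" : null;
    Add a = (Add) e;
    return "INT".equals(typeOf(a.l())) && "INT".equals(typeOf(a.r())) ? "INT" : null;
  }

  /** Evaluator; throws ClassCastException on a dynamic type error. */
  static Object eval(Expr e) {
    if (e instanceof Lit l) return l.v();
    if (e instanceof Not n) return !((Boolean) eval(n.e()));
    Add a = (Add) e;
    return ((Integer) eval(a.l())) + ((Integer) eval(a.r()));
  }

  /** Enumerates all expressions with exactly {@code size} nodes. */
  static List<Expr> enumerate(int size) {
    List<Expr> out = new ArrayList<>();
    if (size == 1) { out.add(new Lit(1)); out.add(new Lit(true)); return out; }
    for (Expr sub : enumerate(size - 1)) out.add(new Not(sub));
    for (int ls = 1; ls < size - 1; ls++)
      for (Expr l : enumerate(ls))
        for (Expr r : enumerate(size - 1 - ls)) out.add(new Add(l, r));
    return out;
  }

  public static void main(String[] args) {
    int checked = 0;
    for (int size = 1; size <= 4; size++)
      for (Expr e : enumerate(size))
        if (typeOf(e) != null) { eval(e); checked++; }  // must not throw
    System.out.println("well-typed programs verified: " + checked);
  }
}
```

The research challenge is doing this at the scale of a real type system: pruning redundant programs, and deciding what the expected outcome of each generated test is.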