Projects
NoTamper, WAPTEC, Plato: Web Form Analysis and Synthesis (2008-Present)
To complicate matters, for the sake of security, any error checking performed on a web form must be replicated on the web server that processes the form; otherwise, an attacker who submits data to the server without going through the web form can force the server to perform actions that violate the spirit of the application. For example, a banking application might provide a web form for transferring money between two accounts owned by the user. If the web form forces the user to select two accounts she owns, but the server fails to check that both submitted accounts are owned by the requesting user, a malicious user can transfer arbitrary amounts of money between arbitrary accounts by submitting a request directly to the server.
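A minimal sketch (a hypothetical application, not code from the papers) of the server-side ownership check that the vulnerable banking application omitted:

```python
# Hypothetical account-ownership table; in a real application this would be
# a database lookup.
ACCOUNT_OWNERS = {"acct-1": "alice", "acct-2": "alice", "acct-3": "bob"}

def transfer(user, src, dst, amount):
    # The web form only lets the user pick accounts she owns, but an attacker
    # can POST arbitrary account ids directly, so the server must re-check.
    if ACCOUNT_OWNERS.get(src) != user or ACCOUNT_OWNERS.get(dst) != user:
        raise PermissionError("both accounts must belong to the requester")
    if amount <= 0:
        raise ValueError("amount must be positive")
    return f"moved {amount} from {src} to {dst}"
```

Without the ownership test, the request `transfer("alice", "acct-1", "acct-3", 100)` would silently move money out of Bob's account.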
To address these issues, we developed three tools. NoTamper and WAPTEC are tools that automatically detect constraints on user input that are enforced by the web form but not by the web server. NoTamper discovered the flaw in the banking application discussed above. NoTamper treats the web server as a black box, making it applicable to a wide range of web applications; WAPTEC treats the web server as a white box, and while more accurate and complete is only applicable when the web server is written in PHP.
The third tool, Plato, takes a different approach. Instead of detecting the presence of mismatches between the web form and the web server, Plato automatically generates the web form and the error-checking code on the web server, ensuring that all of the error checks performed by the form are also performed by the server. Plato takes as input a logical representation of the intended semantics for the data accepted by the form and generates (i) a web form that both checks for errors and fills in values implied by user-provided data and (ii) web server code that when invoked identifies errors and/or computes implied values.
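The guiding idea can be sketched as follows (the names and rule format are illustrative, not Plato's actual input language): a single declarative specification of the form's semantics, from which both the client-side and server-side checks are derived, so the two can never disagree.

```python
# One declarative specification of what the form accepts; each field maps to
# a predicate on the submitted string value (illustrative, not Plato syntax).
SPEC = {
    "age":   lambda v: v.isdigit() and 0 < int(v) < 150,
    "state": lambda v: v in {"IL", "CA", "NY"},
}

def validate(data):
    """Shared validator derived from SPEC: returns field -> error message.
    Both the generated form (client) and the generated handler (server)
    would run checks derived from the same SPEC."""
    errors = {}
    for field, ok in SPEC.items():
        value = data.get(field, "")
        if not ok(value):
            errors[field] = f"invalid value for {field!r}"
    return errors
```

Because both sides are compiled from one specification, a check can never be present on the form but missing on the server, eliminating the mismatch class that NoTamper and WAPTEC detect.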
Papers:
- Plato: A Compiler for Interactive Web Forms (PADL 2011) (Tech Report with proofs)
- WAPTEC: Whitebox Analysis of Web Applications for Parameter Tampering Exploit Construction (CCS 2011)
- NoTamper: Automatic Blackbox Detection of Parameter Tampering Opportunities in Web Applications (CCS 2010)
- Automatic Web Form Construction via Compilation of Paraconsistent Entailment to Relational Databases (Tech Report 2010)
- SmartForm: A Web-based feature configuration tool (VAMOS 2009)
- Inconsistency-tolerant Reasoning with Classical Logic and Large Databases (SARA 2009)
TBA: Tag-Based Authorization (2009-Present)
TBA (Tag-Based Authorization) was designed as a replacement for the current U.S. scheme, which is based on Bell-LaPadula. In TBA, both users and documents are assigned tags representing their pertinent characteristics, and access control decisions are made solely on the basis of those tags. The decisions are expressed in a logical language designed for simplicity while allowing modular policy authoring via delegation. TBA matches the scalability of Bell-LaPadula, because tagging can be performed by relatively untrained users, yet achieves nearly the same flexibility as the many other logic-based access control solutions in the literature, because the majority of the access control policy is expressed in a logical language.
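An illustrative sketch of the tag-based decision procedure (the tag names and the dictionary-based policy encoding are mine, not TBA's logical language): users and documents carry tags, and a small policy over tags decides access.

```python
# Tags assigned by relatively untrained users (the scalable part of TBA).
USER_TAGS = {"alice": {"analyst", "project-x"}, "bob": {"analyst"}}
DOC_TAGS  = {"report.pdf": {"project-x"}}

# The policy (the part written by policy experts in TBA's logical language,
# encoded here as a plain dict): document tag -> user tags required to read.
POLICY = {"project-x": {"analyst", "project-x"}}

def may_read(user, doc):
    # Collect every user tag required by some tag on the document; grant
    # access only when the user holds all of them.
    required = set()
    for tag in DOC_TAGS.get(doc, set()):
        required |= POLICY.get(tag, {tag})
    return required <= USER_TAGS.get(user, set())
```

The division of labor is the point: tagging scales because it needs no policy expertise, while flexibility comes from the policy layer that maps tags to decisions.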
Papers:
FML: Declarative Network Management (2007-2009)
FML (Flow Management Language) is the declarative language that NOX uses for expressing network security policies. FML admits collaborative policy authorship, conflict identification and resolution, policy relaxation and tightening through the addition of statements, and extremely efficient processing.
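A minimal sketch of flow-based policy evaluation in the spirit of FML; the predicate encoding and the deny-over-allow resolution scheme here are simplified stand-ins for FML's actual syntax and conflict-resolution cascade.

```python
# Each rule pairs a predicate over flow attributes with a decision.
# Independently authored statements may conflict on the same flow.
RULES = [
    (lambda f: f["src_group"] == "guests", "deny"),
    (lambda f: f["dst_port"] == 80,        "allow"),
]

def decide(flow, default="deny"):
    # Collect every decision whose predicate matches this flow, then
    # resolve conflicts: deny wins over allow (a simple stand-in for
    # FML's resolution mechanism).
    decisions = {d for pred, d in RULES if pred(flow)}
    if "deny" in decisions:
        return "deny"
    if "allow" in decisions:
        return "allow"
    return default
```

Adding a statement can only tighten (more deny matches) or relax (more allow matches) the policy, which is what makes incremental, collaborative authorship tractable.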
Papers:
- Practical Declarative Network Management (SIGCOMM Workshop)
- Expressing and Enforcing Flow-based Network Security Policies (Tech Report)
- Used as the access control language for a commercial product by Citrix
Collaborative Programming (2008)
Technology that supports Collaborative Programming must be able to combine independently authored instruction sets and be tolerant of incompleteness and conflicts. This project endeavors to understand fundamentally the role that contradictions play in computer science: should they remain relegated to information integration problems or should they play a more central role in the field, eventually being embraced by software engineers?
Papers:
Collaborative Programming: Applications of Logic and Automated Reasoning (IJCAR Workshop)
Herbrand Logic (2006-2008)
Herbrand logic builds in some of the central assumptions found in deductive databases, logic programming, constraint satisfaction, and (some) formal verification techniques. It turns out that Herbrand logic is more expressive than first-order logic. For example, arithmetic for the natural numbers appears to be finitely axiomatizable in Herbrand logic while it is not even recursively enumerable in first-order logic. The price for this greater expressiveness is lack of computability: neither entailment nor satisfiability is semi-decidable. Nevertheless, it allows computer scientists to express more of the problems they care about solving.
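To make the expressiveness claim concrete, here is one way (notation mine, not taken from the tech report) to finitely axiomatize natural-number arithmetic under Herbrand semantics. Because a Herbrand model's domain is exactly the set of ground terms over the constant 0 and the unary function s, the recursive biconditionals below descend along a well-founded order and therefore have a unique solution:

```latex
% Signature: constant 0, unary function s; relations plus/3 and times/3.
% In a Herbrand model the domain is exactly 0, s(0), s(s(0)), ..., so s is
% automatically injective and distinct from 0, and the recursions below
% are well-founded; hence each biconditional pins down a unique relation.
\begin{align*}
\forall x\, y\, z.\;\; & \mathit{plus}(x, y, z) \leftrightarrow
    \bigl[ (x = 0 \land z = y) \lor {} \\
    & \quad \exists u\, v\, (x = s(u) \land z = s(v) \land \mathit{plus}(u, y, v)) \bigr] \\
\forall x\, y\, z.\;\; & \mathit{times}(x, y, z) \leftrightarrow
    \bigl[ (x = 0 \land z = 0) \lor {} \\
    & \quad \exists u\, w\, (x = s(u) \land \mathit{times}(u, y, w) \land \mathit{plus}(w, y, z)) \bigr]
\end{align*}
```

No finite first-order theory can do this, since first-order models need not have a well-founded, term-generated domain; this is the gap the expressiveness claim refers to.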
Paper: Herbrand Logic (Tech Report)
Extensional Reasoning (2001-2008)
It turns out that for complete logical theories, i.e. those satisfied by essentially a single model, the major hurdle in translating a theory into a database system is detecting that the theory is complete. The case of incomplete theories is theoretically more interesting because the many models that satisfy the theory must be compiled into a single model; thus, Extensional Reasoning for incomplete theories amounts to a form of theory completion. Once the theory has been completed appropriately, translating it into a database is straightforward.
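A toy illustration (my own, not the Extensional Reasoning system) of why complete theories map cleanly to databases: a theory satisfied by a single model can be materialized as tables, after which entailment becomes lookup and derived relations become joins.

```python
# The unique model of a tiny, complete theory, stored extensionally as a
# relation (hypothetical individuals, chosen for illustration).
PARENT = {("ann", "bob"), ("bob", "carol")}

def entails_parent(x, y):
    # For a complete theory, "T entails parent(x, y)" reduces to membership
    # in the materialized relation -- a database query, not theorem proving.
    return (x, y) in PARENT

def grandparents():
    # A derived relation computed as a join over the stored table.
    return {(a, c) for (a, b1) in PARENT for (b2, c) in PARENT if b1 == b2}
```

An incomplete theory (say, one asserting only "ann has some child") has many models and cannot be stored this way directly, which is why the incomplete case requires theory completion first.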
Papers:
General Game Playing (2005-2007)
General Game Playing systems are characterized by their use of general cognitive information-processing technologies (such as knowledge representation, reasoning, learning, and rational behavior). Unlike specialized game-playing systems (such as Deep Blue), they do not rely exclusively on algorithms designed in advance for specific games. Deep Blue may have beaten the world chess champion, but it cannot play checkers or even balance a checkbook.
General Game Playing is a recent initiative developed by the Stanford Logic Group. To foster research in this area, we developed an online testbed that allows players to compete in multi-player games, ran a GGP competition at AAAI for the past two years, taught classes on GGP at Stanford, and supported GGP classes at two universities in Germany.
Papers:
General Game Playing: Game Description Language Specification (Tech Report)
Automatically Proving Playability Through Abstraction (Tech Report)
Transfer Learning (2005-2007)
Our role in this project is to provide a formal foundation for this work and a testbed built on this foundation for evaluating an agent's ability to perform transfer. Our approach leverages the General Game Playing framework to formalize the notion of a problem and mappings between general games to formalize relationships between problems.
Paper: Transfer Learning Level Definitions (Tech Report)
HP-Utility: Utility computing (2004-2005)
Currently, designing, constructing, and maintaining those resources is done by hand. HP wants to automate more of that process by employing declarative descriptions of resources and their configurations so that results from constraint satisfaction and planning can be brought to bear on the problem.
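A hedged sketch of the idea (the machines, services, and capacity figures are illustrative): once resources and configurations are described declaratively, generating a configuration becomes a constraint satisfaction search, here a brute-force one over service-to-machine assignments.

```python
import itertools

SERVICES = ["web", "db"]
MACHINES = {"m1": 4, "m2": 8}    # machine -> CPU capacity
DEMAND   = {"web": 2, "db": 6}   # service -> CPUs required

def consistent(assignment):
    # Capacity constraint: total demand placed on each machine must not
    # exceed its declared capacity.
    load = {m: 0 for m in MACHINES}
    for svc, m in assignment.items():
        load[m] += DEMAND[svc]
    return all(load[m] <= MACHINES[m] for m in MACHINES)

def solve():
    # Enumerate candidate assignments and return the first consistent one;
    # a real solver would prune with constraint propagation instead.
    for choice in itertools.product(MACHINES, repeat=len(SERVICES)):
        assignment = dict(zip(SERVICES, choice))
        if consistent(assignment):
            return assignment
    return None
```

The payoff of the declarative encoding is that adding a requirement (say, anti-co-location of two services) means adding a constraint, not rewriting a hand-built configuration procedure.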
Papers:
Object-Oriented Constraint Satisfaction Problems (Tech Report)
Using Object-Oriented Constraint Satisfaction for Automated Configuration Generation (DSOM 2004)