We provide a simpler alternative proof of the PSPACE-hardness of propositional Hoare logic.
We use Kleene algebra with tests to verify a wide assortment of common compiler optimizations, including dead code elimination, common subexpression elimination, copy propagation, loop hoisting, induction variable elimination, instruction scheduling, algebraic simplification, loop unrolling, elimination of redundant instructions, array bounds check elimination, and introduction of sentinels. In each of these cases, we give a formal equational proof of the correctness of the optimizing transformation.
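To give the flavor of such an equational proof (an illustrative sketch, not an excerpt from the paper): encode "if b then q else r" as bq + ¬b·r, and suppose the preceding program p is known to establish the test b, expressed in KAT as p = pb. Then the else-branch is provably dead:

```latex
\begin{align*}
p(bq + \bar{b}r) &= pbq + p\bar{b}r   && \text{distributivity}\\
                 &= pbq + pb\bar{b}r  && \text{since } p = pb\\
                 &= pbq + 0           && \text{since } b\bar{b} = 0 \text{ and } x0 = 0\\
                 &= pq                && \text{since } p = pb
\end{align*}
```

Each step is justified by a Kleene or Boolean algebra axiom, which is what makes such correctness arguments purely equational.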
We investigate the completeness of Hoare Logic on the propositional level. In particular, the expressiveness requirements of Cook's proof are characterized propositionally. We give a completeness result for Propositional Hoare Logic (PHL): all relationally valid rules
are derivable in PHL, provided the propositional expressiveness conditions are met. Moreover, if the programs p_i in the premises are atomic, no expressiveness assumptions are needed.
We show that propositional Hoare logic is subsumed by the type calculus of typed Kleene algebra augmented with subtypes and typecasting. Assertions are interpreted as typecast operators. Thus Hoare-style reasoning with partial correctness assertions reduces to typechecking in this system.
Parikh's Theorem says that the commutative
image of every context free language is the commutative image of some regular
set. Pilling has shown that this theorem is essentially a statement about
least solutions of polynomial inequalities. We prove the following general
theorem of commutative Kleene algebra, of which Parikh's and Pilling's
theorems are special cases: Every system of polynomial inequalities f_i(x_1, ..., x_n) <= x_i, 1 <= i <= n, over a commutative Kleene algebra K has a least solution in K^n; moreover, the components of the solution are given by polynomials in the coefficients of the f_i.
We also give a closed-form solution in terms of the Jacobian matrix.
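A standard one-variable instance, included here for illustration rather than taken from the abstract: the least solution of a single inequality is given by the familiar star formula, and Parikh's theorem is visible already in this case.

```latex
% In any Kleene algebra, the least x satisfying b + ax \le x is x = a^*b:
\[
b + a(a^*b) \le a^*b, \qquad
b + ax \le x \;\Longrightarrow\; a^*b \le x .
\]
% Over a commutative Kleene algebra, the grammar S -> aSb | \varepsilon
% reads as the inequality 1 + ab\,x \le x, with least solution (ab)^*;
% thus the commutative image of \{a^n b^n \mid n \ge 0\} equals that of
% the regular set (ab)^*, as Parikh's theorem asserts.
```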
We show that Kleene algebra with tests (KAT) subsumes propositional Hoare logic (PHL). Thus the specialized syntax and deductive apparatus of Hoare logic are inessential and can be replaced by simple equational reasoning. In addition, we show that all relationally valid inference rules are derivable in KAT and that deciding the relational validity of such rules is PSPACE-complete.
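For illustration (a standard encoding, sketched here rather than quoted from the paper): the partial correctness assertion {b} p {c} becomes the KAT equation bp¬c = 0, equivalently bp = bpc, and Hoare's composition rule reduces to a short calculation.

```latex
% Hypotheses: bp = bpc encodes {b} p {c}, and cq = cqd encodes {c} q {d}.
% Conclusion: b(pq)\bar{d} = 0, i.e., {b} pq {d}.
\begin{align*}
cq\bar{d}    &= cqd\bar{d} = 0        && \text{since } cq = cqd \text{ and } d\bar{d} = 0\\
b(pq)\bar{d} &= (bp)q\bar{d} = bpcq\bar{d} && \text{since } bp = bpc\\
             &= bp\,(cq\bar{d}) = 0
\end{align*}
```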
In previous work we have found it necessary
to argue that certain theorems of Kleene algebra hold even when the symbols
are interpreted as nonsquare matrices. In this note we define and investigate
typed Kleene algebra, a typed version of Kleene algebra in which objects
have types s -> t. Although nonsquare matrices are the principal motivation,
there are many other useful interpretations: traces, binary relations,
Kleene algebra with tests. We give a set of typing rules and show that
every expression has a unique most general typing (mgt). Then we
prove the following metatheorem that incorporates the abovementioned results
for nonsquare matrices as special cases. Call an expression 1-free
if it contains only the Kleene algebra operators (binary) +, (unary) ⁺,
0, and ·, but no occurrence of 1 or *. Then every universal 1-free
formula that is a theorem of Kleene algebra is also a theorem of typed
Kleene algebra under its most general typing. The metatheorem is false
without the restriction to 1-free formulas.
We study the complexity of reasoning in Kleene
algebra and *-continuous Kleene algebra in the presence of extra assumptions;
i.e., the complexity of deciding the validity of universal Horn formulas
E --> s=t, where E is a finite set of equations. We obtain various levels
of complexity based on the form of the assumptions E. Our main results
are: for *-continuous Kleene algebra,
The equational theory of Kleene algebras with
tests (KAT) has been shown to be decidable in at most exponential time
by an efficient reduction to Propositional Dynamic Logic. We show that
the theory is PSPACE-complete. Thus KAT, while considerably more expressive
than Kleene algebra, is no more difficult to decide.
We prove the completeness of the equational
theory of Kleene algebra with tests and *-continuous Kleene algebra with
tests over language-theoretic and relational models. We also show decidability.
Cohen's reduction of Kleene algebra with hypotheses of the form r=0 to
Kleene algebra without hypotheses is simplified and extended to handle
Kleene algebras with tests.
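The shape of the reduction, as commonly stated in the literature (a paraphrase, not a quotation; here a_1, ..., a_k are the alphabet symbols and u is the universal regular expression): a hypothesis r = 0 can be absorbed into the equation itself.

```latex
\[
r = 0 \;\models\; s = t
\quad\Longleftrightarrow\quad
\models\; s + uru = t + uru,
\qquad u = (a_1 + \cdots + a_k)^* .
\]
```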
We introduce Kleene algebra with tests,
an equational system for manipulating programs. We give a purely equational
proof, using Kleene algebra with tests and commutativity conditions, of
the following classical result: every while program can be simulated
by a while program with at most one while loop. The proof
illustrates the use of Kleene algebra with tests and commutativity conditions
in program equivalence proofs. We also show that the universal Horn theory
of *-continuous Kleene algebras is not finitely axiomatizable.
We give a finitary axiomatization of the algebra
of regular events involving only equations and equational implications.
Unlike Salomaa's axiomatizations, the axiomatization given here is sound
for all interpretations over Kleene algebras.
Action algebras have been proposed by Pratt
as an alternative to Kleene algebras. Their chief advantage over Kleene
algebras is that they form a finitely-based equational variety, so the
essential properties of * (iteration) are captured purely equationally.
However, unlike Kleene algebras, they are not closed under the formation
of matrices, which renders them inapplicable in certain constructions in
automata theory and the design and analysis of algorithms. In this paper
we consider a class of action algebras called action lattices. An action
lattice is simply an action algebra that forms a lattice under its natural
order. Action lattices combine the best features of Kleene algebras and
action algebras: like action algebras, they form a finitely-based equational
variety; like Kleene algebras, they are closed under the formation of matrices.
Moreover, they form the largest subvariety of action algebras for which
this is true. All common examples of Kleene algebras appearing in automata
theory, logics of programs, relational algebra, and the design and analysis
of algorithms are action lattices.
Kleene algebras are an important class of algebraic
structures that arise in diverse areas of computer science: program logic
and semantics, relational algebra, automata theory, and the design and
analysis of algorithms. The literature contains several inequivalent definitions
of Kleene algebras and related algebraic structures. In this paper we establish
some new relationships among these structures. Our main results are: (i)
There is a Kleene algebra that is not *-continuous. (ii) The categories
of *-continuous Kleene algebras, closed semirings, and S-algebras are strongly
related by adjunctions. (iii) The axioms of Kleene algebra are not complete
for the universal Horn theory of the regular events. This refutes a conjecture
of Conway. (iv) Right-handed Kleene algebras are not necessarily left-handed
Kleene algebras. This verifies a weaker version of a conjecture of Pratt.
Set constraints are inclusions between expressions
denoting sets of ground terms over a finitely ranked alphabet. Rational
spaces are topological spaces obtained as spaces of runs of topological
hypergraphs. They were introduced by Kozen, who described the topological
structure of spaces of solutions to systems of set constraints in terms
of rational spaces. In this paper we continue the investigation of rational
spaces. We give a Myhill-Nerode like characterization of rational points,
which in turn is used to rederive results about the rational points of
finitary rational spaces. We define congruences on hypergraphs and investigate
their interplay with the Myhill-Nerode characterization, and determine
the computational complexity of some decision problems related to rational
spaces.
Set constraints are inclusion relations between
expressions denoting sets of ground terms over a ranked alphabet. They
are the main ingredient in set-based program analysis. In this paper we
provide a Gentzen-style axiomatization for sequents P |- Q, where P and
Q are finite sets of set constraints, based on the axioms of termset algebra.
Sequents of the restricted form P |- false correspond to positive
set constraints, and those of the more general form P |- Q correspond to
systems of mixed positive and negative set constraints. We show that the
deductive system is (i) complete for the restricted sequents P |- false
over standard models, (ii) incomplete for general sequents P |- Q over
standard models, but (iii) complete for general sequents over set-theoretic
termset algebras.
Set constraints are inclusions between expressions
denoting sets of ground terms. They have been used extensively in program
analysis and type inference. In this paper we investigate the topological
structure of the spaces of solutions to systems of set constraints. We
identify a family of topological spaces called rational spaces,
which formalize the notion of a topological space with a regular or self-similar
structure, such as the Cantor discontinuum or the space of runs of a finite
automaton. We develop the basic theory of rational spaces and derive generalizations
and proofs from topological principles of some results in the literature
on set constraints.
Set constraints are inclusion relations between
expressions denoting sets of ground terms over a ranked alphabet. They
are the main ingredient in set-based program analysis. In this paper we
describe a constraint logic programming language CLP(SC) over set constraints
in the style of Jaffar and Lassez. The language subsumes ordinary logic
programs over a Herbrand domain. We give an efficient unification algorithm
and operational, declarative, and fixpoint semantics. We show how the language
can be applied in set-based program analysis by deriving explicitly the
monadic approximation of the collecting semantics of Heintze and Jaffar.
Set constraints are inclusion relations between
sets of ground terms over a ranked alphabet. They have been used extensively
in program analysis and type inference. Here we present an equational axiomatization
of the algebra of set constraints. Models of these axioms are called termset
algebras. They are related to the Boolean algebras with operators of
Jonsson and Tarski. We also define a family of combinatorial models called
topological
term automata, which are essentially the term automata studied by Kozen,
Palsberg, and Schwartzbach endowed with a topology such that all relevant
operations are continuous. These models are similar to Kripke frames for
modal or dynamic logic. We establish a Stone duality between termset algebras
and topological term automata, and use this to derive a completeness theorem
for a related multidimensional modal logic. Finally, we prove a small model
property by filtration, and argue that this result contains the essence
of several algorithms appearing in the literature on set constraints.
Set constraints are relations between
sets of terms. They have been used extensively in various applications
in program analysis and type inference. We present several results on the
computational complexity of solving systems of set constraints. The systems
we study form a natural complexity hierarchy depending on the form of the
constraint language.
Set constraints are relations between
sets of terms. They have been used extensively in various applications
in program analysis and type inference. Recently, several algorithms for
solving general systems of positive set constraints have appeared. In this
paper we consider systems of mixed positive and negative constraints, which
are considerably more expressive than positive constraints alone. We show
that it is decidable whether a given such system has a solution. The proof
involves a reduction to a number-theoretic decision problem that may be
of independent interest.
Partial types for the lambda-calculus were
introduced by Thatte in 1988 as a means of typing objects that are not
typable with simple types, such as heterogeneous lists and persistent data.
In that paper he showed that type inference for partial types was semidecidable.
Decidability remained open until quite recently, when O'Keefe and Wand
gave an exponential time algorithm for type inference. In this paper we
give an O(n^3) algorithm. Our algorithm constructs a certain
finite automaton that represents a canonical solution to a given set of
type constraints. Moreover, the construction works equally well for recursive
types; this solves an open problem of O'Keefe and Wand.
Subtyping in the presence of recursive types
for the lambda-calculus was studied by Amadio and Cardelli in 1991. In
that paper they showed that the problem of deciding whether one recursive
type is a subtype of another is decidable in exponential time. In this
paper we give an O(n^2) algorithm. Our algorithm is based on
a simplification of the definition of the subtype relation, which allows
us to reduce the problem to the emptiness problem for a certain finite
automaton with quadratically many states. It is known that equality of
recursive types and the covariant Böhm order can be decided efficiently
by means of finite automata, since they are just language equality and
language inclusion, respectively. Our results extend the automata-theoretic
approach to handle orderings based on contravariance.
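The automata-theoretic idea can be sketched concretely. Below is a toy coinductive subtype checker for recursive types built from base types and function arrows, using the contravariant/covariant arrow rule; recursive types are represented as cyclic graphs of nodes. This is an illustration of the general approach, not the O(n^2) algorithm of the paper, and all names in it are invented for the example.

```python
# Types are graph nodes tagged 'top', 'bot', 'int', or 'arrow';
# recursion is expressed by letting a node reach itself.
class T:
    def __init__(self, tag, left=None, right=None):
        self.tag, self.left, self.right = tag, left, right

def subtype(s, t, assumed=None):
    # Coinductive check: a pair may be assumed while it is being
    # verified, which is what makes the search terminate on cyclic
    # (recursive) types -- there are only finitely many node pairs.
    if assumed is None:
        assumed = set()
    if (id(s), id(t)) in assumed:
        return True
    if t.tag == 'top' or s.tag == 'bot':
        return True
    if s.tag == t.tag == 'int':
        return True
    if s.tag == 'arrow' and t.tag == 'arrow':
        assumed = assumed | {(id(s), id(t))}
        # contravariant in the domain, covariant in the codomain
        return (subtype(t.left, s.left, assumed) and
                subtype(s.right, t.right, assumed))
    return False

INT, TOP, BOT = T('int'), T('top'), T('bot')
# a = mu t. int -> t; b is the same type unrolled once more
a = T('arrow', INT, None); a.right = a
b2 = T('arrow', INT, None); b = T('arrow', INT, b2); b2.right = b
assert subtype(a, b) and subtype(b, a)          # equal recursive types
assert subtype(T('arrow', INT, INT), T('arrow', BOT, TOP))
assert not subtype(T('arrow', BOT, INT), T('arrow', INT, INT))
```

The "assume while verifying" discipline is exactly what reduces the problem to a reachability (emptiness) question on a product automaton over pairs of type nodes.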
The modular group occupies a central position
in many branches of mathematical sciences. In this paper we give average
polynomial-time algorithms for the unbounded and bounded membership problems
for finitely generated subgroups of the modular group. The latter result
affirms a conjecture of Gurevich.
Functional decomposition---whether a function
f(x) can be written as a composition of functions g(h(x)) in a nontrivial
way---is an important primitive in symbolic computation systems. The problem
of univariate polynomial decomposition was shown to have an efficient solution
by Kozen and Landau. Dickerson and von zur Gathen gave algorithms for certain
multivariate cases. Zippel showed how to decompose rational functions.
In this paper, we address the issue of decomposition of algebraic functions.
We show that the problem is related to univariate resultants in algebraic
function fields, and in fact can be reformulated as a problem of resultant
decomposition. We characterize all decompositions of a given algebraic
function up to isomorphism, and give an exponential time algorithm for
finding a nontrivial one if it exists. The algorithm involves genus calculations
and constructing transcendental generators of fields of genus zero.
We examine the question of when a polynomial
f
over a commutative ring has a nontrivial functional decomposition
f(x)
= g(h(x)). Previous algorithms (Barton and Zippel 76, Barton and Zippel
85, Alagar and Thanh 85) are exponential-time in the worst case, require
polynomial factorization, and only work over fields of characteristic 0.
We present an O(n^2)-time algorithm. We also show that
the problem is in NC. The algorithm does not use polynomial factorization,
and works over any commutative ring containing a multiplicative inverse
of r. Finally, we give a new structure theorem that leads to necessary
and sufficient algebraic conditions for decomposability over any field.
We apply this theorem to obtain an NC algorithm for decomposing irreducible
polynomials over finite fields, and a subexponential algorithm for decomposing
irreducible polynomials over any field admitting efficient polynomial factorization.
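Decompositions are easy to verify even when they are nontrivial to find. As a small self-contained illustration (a checker only, not the decomposition algorithm of the paper), here is polynomial composition over coefficient lists, confirming a decomposition f(x) = g(h(x)):

```python
def poly_mul(p, q):
    # Multiply polynomials given as coefficient lists, lowest degree first.
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def poly_compose(g, h):
    # Evaluate g at the polynomial h by Horner's rule.
    out = [g[-1]]
    for c in reversed(g[:-1]):
        out = poly_mul(out, h)
        out[0] += c
    return out

g = [1, 2, 1]   # g(y) = 1 + 2y + y^2 = (y + 1)^2
h = [0, 0, 1]   # h(x) = x^2
f = poly_compose(g, h)
# f(x) = 1 + 2x^2 + x^4, so f = g o h is a nontrivial decomposition
```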
We give a new algorithm for resolving singularities
of plane curves. The algorithm is polynomial time in the bit complexity
model, does not require factorization, and works over the rationals or
finite fields.
In his study of Newton's root approximation method, Smale defined the Newtonian
graph of a complex univariate polynomial f. The vertices of this graph
are the roots of f and f' and the edges are the degenerate curves of flow
of the Newtonian vector field N_f(z) = -f(z)/f'(z). The embedded
edges of this graph form the boundaries of root basins in Newton's root
approximation method. The graph defines a treelike relation on the roots
of f and f', similar to the linear order when f has only real roots. We
give an efficient algebraic algorithm based on cell decomposition to compute
the Newtonian graph. The resulting structure can be used to query whether
two complex points are in the same basin. This suggests a modified version
of Newton's method in which one can test whether a step has crossed a basin
boundary. We show that this modified version does not necessarily converge
to a root. Stefansson has recently extended this algorithm to handle rational
and algebraic functions without significant increase in complexity. He
has shown that the Newtonian graph tesselates the associated Riemann surface
and can be used in conjunction with Euler's formula to give an NC algorithm
to calculate the genus of an algebraic curve.
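For context, the basins in question are those of the classical iteration z -> z - f(z)/f'(z), whose degenerate flow lines form the Newtonian graph. A minimal sketch (illustrative only; the cell-decomposition algorithm itself is far more involved):

```python
def newton_step(z, f, df):
    # One step of Newton's method; the Newtonian vector field
    # N_f(z) = -f(z)/f'(z) is the continuous-time version of this step.
    return z - f(z) / df(z)

f  = lambda z: z * z - 1     # roots +1 and -1
df = lambda z: 2 * z

def run(z, steps=50):
    for _ in range(steps):
        z = newton_step(z, f, df)
    return z

# Starting points on opposite sides of the basin boundary
# (here the imaginary axis) converge to different roots.
assert abs(run(1.5 + 0j) - 1) < 1e-9
assert abs(run(-0.5 + 0j) + 1) < 1e-9
```

Knowing the basin boundaries exactly, as the Newtonian graph provides, is what allows a modified method to detect when a step has crossed into another root's basin.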
This is a survey paper on resultants. From
the introduction:
A resultant is a purely algebraic criterion for determining whether a finite collection of polynomials have a common zero. It has been shown to be a useful tool in the design of efficient parallel and sequential algorithms in symbolic algebra, computational geometry, computational number theory, and robotics.
We begin with a brief history of resultants
and a discussion of some of their important applications. Next we review
some of the mathematical background in commutative algebra that will be
used in subsequent sections. The Nullstellensatz of Hilbert is presented
in both its strong and weak forms. We also discuss briefly the necessary
background on graded algebras, and define affine and projective spaces
over arbitrary fields. We next present a detailed account of the resultant
of a pair of univariate polynomials, and present efficient parallel algorithms
for its computation. The theory of subresultants is developed in
detail, and the computation of polynomial remainder sequences is derived.
A resultant system for several univariate polynomials and algorithms for
the gcd of several polynomials are given. Finally, we develop the theory
of multivariate resultants as a natural extension of the univariate case.
Here we treat both classical results on the projective (homogeneous) case,
as well as more recent results on the affine (inhomogeneous) case. The
u-resultant
of a set of multivariate polynomials is defined and a parallel algorithm
is presented. We discuss the computation of generalized characteristic
polynomials and relate them to the decision problem for the theories
of real closed and algebraically closed fields.
Rabin conditions are a general class
of properties of infinite sequences that encompass most known automata-theoretic
acceptance conditions and notions of fairness. In this paper, we introduce
a concept, called a Rabin measure, which in a precise sense expresses
progress for each transition towards satisfaction of the Rabin condition.
We show that these measures of progress are linked to the Kleene-Brouwer
ordering of recursion theory. Klarlund uses this property to establish
an exponential upper bound for the complementation of automata on infinite
trees. When applied to termination problems under fairness constraints,
Rabin measures eliminate the need for syntax-dependent, recursive applications
of proof rules.
An elementary proof of the Myhill-Nerode Theorem
for term automata.
Let T be the set of ground terms over a finite
ranked alphabet. We define partial automata on T and prove that
the finitely generated congruences on T are in one-to-one correspondence
(up to isomorphism) with the finite partial automata on T with no inaccessible
and no inessential states. We give an application in term rewriting: every
ground term rewrite system has a canonical equivalent system that can be
constructed in polynomial time.
An undergraduate text on automata and
computability theory.
This paper addresses the problem of sending
an encoded video stream over a channel of limited bandwidth. When there
is insufficient bandwidth available, some data must be dropped. For many
video encodings, some data are more important than others, leading to a
natural prioritization of the data. In this paper we give fast algorithms
to determine a prioritization which optimizes the visual quality of the
received data. By optimized visual quality, we mean that the expected
maximum interval of unplayable frames is minimized.
Our results are obtained in a model of encoded
video data that is applicable to many encoding technologies. The highlight
of the model is an interesting relationship between the play order and
dependence order of frames. This property allows fast determination of optimal
send orders by dynamic programming and is satisfied by all MPEG sequences.
Laszlo Lovasz recently posed the following
problem: ``Is there an NC algorithm for testing if a given graph has a
unique perfect matching?'' We present such an algorithm for bipartite graphs.
We also give NC algorithms for obtaining a transitive orientation of a
comparability graph, and an interval representation of an interval graph.
These enable us to obtain an NC algorithm for finding a maximum matching
in an incomparability graph.
The change-making problem is the problem
of representing a given value with the fewest coins possible. We investigate
the problem of determining whether the greedy algorithm produces an optimal
representation of all amounts for a given set of coin denominations
1 = c_1 < c_2 < ... < c_m. Chang and Gill show that if the greedy
algorithm is not always optimal, then there exists a counterexample x
in the range

c_3 <= x < c_m(c_m c_{m-1} + c_m - 3c_{m-1}) / (c_m - c_{m-1}).

To test for the existence of such a counterexample, Chang and Gill propose computing and comparing the greedy and optimal representations of all x in this range. In this paper we show that if a counterexample exists, then the smallest one lies in the range

c_3 + 1 < x < c_m + c_{m-1},

and these bounds are tight. Moreover, we give a simple test for the existence of a counterexample that does not require the calculation of optimal representations. In addition, we give a complete characterization of three-coin systems and an efficient algorithm for all systems with a fixed number of coins. Finally, we show that a related problem is coNP-complete.
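The bounds can be checked mechanically on a small system. A brute-force sketch (helper names invented for the example; note the paper's own test avoids computing optimal representations, which this check does not):

```python
def greedy(x, coins):
    # Number of coins the greedy algorithm uses for amount x.
    count = 0
    for c in sorted(coins, reverse=True):
        count += x // c
        x %= c
    return count

def opt(x, coins):
    # Minimum number of coins for amount x, by dynamic programming.
    # Assumes denomination 1 is present, so every amount is representable.
    best = [0] + [None] * x
    for v in range(1, x + 1):
        best[v] = 1 + min(best[v - c] for c in coins if c <= v)
    return best[x]

coins = [1, 3, 4]
# For {1, 3, 4}, greedy(6) uses 4+1+1 = 3 coins, but 3+3 is optimal.
witness = next(x for x in range(1, 20) if greedy(x, coins) > opt(x, coins))
# witness = 6 lies strictly between c_3 + 1 = 5 and c_m + c_{m-1} = 7.
```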
A graduate-level textbook on algorithms.
A theory satisfies the k-variable property if
every first-order formula is equivalent to a formula with at most k bound
variables (possibly reused). Gabbay has shown that a model of temporal
logic satisfies the k-variable property for some k if and only if there
exists a finite basis for the temporal connectives over that model. We
give a model-theoretic method for establishing the k-variable property,
involving a restricted Ehrenfeucht-Fraisse game in which each player has
only k pebbles. We use the method to unify and simplify results in the
literature for linear orders. We also establish new k-variable properties
for various theories of bounded-degree trees, and in each case obtain tight
upper and lower bounds on k. This gives the first finite basis theorems
for branching-time models of temporal logic.
An interleaver is a hardware device
commonly used in conjunction with error correcting codes to counteract
the effect of burst errors. Interleavers are in widespread use and much
is known about them from an engineering standpoint. In this paper we propose
a mathematical model that provides a rigorous foundation for the theoretical
study of interleavers. The model captures precisely such notions as block
and convolutional interleavers, spread, periodicity, causality, latency,
and memory usage. Using this model, we derive several optimality results
on the latency and memory usage of interleavers. We describe a family of
block interleavers and show that they are optimal with respect to latency
among all block interleavers with a given spread. We also give tight upper
and lower bounds on the memory requirements of interleavers.
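A minimal sketch of the block interleaver idea (illustrative only; the paper's model and optimality results are more general): write a stream into a rows x cols array row by row and read it out column by column, so that a burst of up to `rows` consecutive channel errors touches symbols that were `cols` apart in the source.

```python
def block_interleave(seq, rows, cols):
    # Write the sequence row by row into a rows x cols array,
    # then read it out column by column.
    assert len(seq) == rows * cols
    return [seq[r * cols + c] for c in range(cols) for r in range(rows)]

def block_deinterleave(seq, rows, cols):
    # Inverse permutation: write column by column, read row by row.
    assert len(seq) == rows * cols
    out = [None] * (rows * cols)
    k = 0
    for c in range(cols):
        for r in range(rows):
            out[r * cols + c] = seq[k]
            k += 1
    return out

data = list(range(12))                 # 3 rows x 4 cols
tx = block_interleave(data, 3, 4)
assert block_deinterleave(tx, 3, 4) == data

# A burst hitting 3 consecutive transmitted symbols (one column)
# corrupts symbols that were 4 apart in the original stream.
burst = tx[0:3]
```

The "spread" of an interleaver in the paper's sense measures exactly this guaranteed separation, and latency and memory are the costs the optimality results trade against it.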
Security of mobile code is a major issue in
today's global computing environment. When you download a program
from an untrusted source, how can you be sure it will not do something
undesirable? In this paper I will discuss a particular approach to
this problem called
language-based security. In this approach,
security information is derived from a program written in a high-level
language during the compilation process and is included in the compiled
object. This extra security information can take the form of a formal
proof, a type annotation, or some other form of certificate or annotation.
It can be downloaded along with the object code and automatically verified
before running the code locally, giving some assurance against certain
types of failure or unauthorized activity. The verifier must be trusted,
but the compiler, code, and certificate need not be. Java bytecode
verification is an example of this approach. I will give an overview
of some recent work in this area, including a particular effort in which
we are trying to make the production of certificates and the verification
as efficient and invisible as possible.
We introduce a simple and efficient approach
to the certification of compiled code. We ensure a basic but nontrivial
level of code safety, including control flow safety, memory safety, and
stack safety. The system is designed to be simple, efficient, and (most
importantly) relatively painless to incorporate into existing compilers.
Although less expressive than the proof-carrying code of Necula
and Lee or typed assembly language of Morrisett et al., our certificates
are by comparison extremely compact and relatively easy to produce and
to verify. Unlike Java bytecode, our system operates at the level of native
code; it is not interpreted and no further compilation is necessary.