PVS 3.2 contains a number of enhancements and bug fixes.

Installation Notes

Installation is the same as usual. However, if you have received patches from SRI and put them into your ~/.pvs.lisp file, they should be removed. If you anticipate running the newer and older versions side by side, you can instead put #-pvs3.2 in front of each patch. This is a directive to the Lisp reader, and it causes the following s-expression to be ignored unless you are running an earlier version of PVS.
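
For example, a patched function in ~/.pvs.lisp might be wrapped as follows (the patch itself is purely hypothetical), so that it is read only by versions earlier than 3.2:

  ;; in ~/.pvs.lisp -- hypothetical patch for a pre-3.2 release
  #-pvs3.2
  (defun patched-helper (x)
    (+ x 1))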

New Features

Startup Script Update

The PVS startup script pvs has been made to work with later versions of Linux (e.g., Red Hat 9 and Red Hat Enterprise Linux).

Theory Interpretation Enhancements

There are a number of changes related to theory interpretations, as well as many bug fixes.

There is now a new form of mapping that makes it simpler to systematically interpret theories. This is the theory view, and it allows names to be associated without having to list them directly. For example, given a theory of timed automata:

  automaton:THEORY	
  BEGIN
    actions: TYPE+;
    visible(a:actions):bool;
    states: TYPE+;
    enabled(a:actions, s:states): bool;
    trans(a:actions, s:states):states;
    equivalent(s1, s2:states):bool;
    reachable(s:states):bool;
    start(s:states):bool;
  END automaton
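
One can then define a concrete machine theory, for instance along the following lines (a hypothetical sketch; the particular definitions are only for illustration):

  machine: THEORY
  BEGIN
    actions: TYPE+ = upto(1)                  % two actions, 0 and 1
    visible(a: actions): bool = (a = 1)
    states: TYPE+ = nat
    enabled(a: actions, s: states): bool = TRUE
    trans(a: actions, s: states): states = s + a
    equivalent(s1, s2: states): bool = (s1 = s2)
    reachable(s: states): bool = TRUE
    start(s: states): bool = (s = 0)
  END machine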

Given such a machine, with definitions for actions and so on, the corresponding interpretation is created simply by typing

    IMPORTING automaton :-> machine

This is read as a machine viewed as an automaton, and is equivalent to

    IMPORTING machine
    IMPORTING automaton {{ actions := machine.actions, ... }}

Here the theory view appears in an importing, but it is really a theory name, and hence may be used as part of any name. However, the implicit importing of the target is done only for theory declarations and importings; in all other cases the needed instance must already be imported. Thus it is an error to reference

  automaton :-> machine.start(s)

unless machine has already been imported. This is not very readable,(1) so it is best to introduce a theory abbreviation:

  IMPORTING automaton :-> machine AS M1a

or a theory declaration:

  M1t: THEORY = automaton :-> machine

The difference is that M1a is just an abbreviation for an instance of an existing theory, whereas M1t is a new copy of that theory, one that introduces new entities. Now consider

  IMPORTING automaton :-> machine AS M2a
  M2t: THEORY = automaton :-> machine

The formula M1a.actions = M2a.actions is type correct, and trivially true, whereas M1t.actions = M2t.actions is not even type correct, as there are two separate actions declarations involved, and each of those is distinct from machine.actions.

The grammar for Name and TheoryName has been changed to reflect the new syntax:

  TheoryName := [Id '@'] Id [Actuals] [Mappings] [':->' TheoryName]

  Name := [Id '@'] IdOp [Actuals] [Mappings] [':->' TheoryName] ['.' IdOp]

The left side of :-> is called the source, and the right side is called the target. Note that in this case the target provides a refinement for the source.

For a given theory view, names are matched as follows. The uninterpreted types and constants of the source are collected and matched against the types and constants of the target. Partial matching is allowed, though it is an error if nothing matches. After the matches are found, the mapping is created and typechecked.

References to Mapped Entities

Mapping an entity typically means that it is not accessible in the context. For example, one may have

  IMPORTING T{{x := e}} AS T1

where e is an expression of the current context. The constant x, having been mapped, is not available, but it is easy to forget this, and one is often tempted to refer to T1.x. One possible workaround is to use theory declarations with = in place of :=, but then a new copy of T is created, which may not be desirable (or in some cases even possible; see the Theory Interpretations Report).

To make mappings more convenient, such references are now allowed. Thus in a name of the form T1.x, x is first looked for in T1 in the usual way; if no compatible x is found and T1 has mappings, then x is searched for among the left-hand sides of those mappings, and if found it is treated as a macro for the corresponding right-hand side. Note that x by itself cannot be referenced in this way; the theory name must be included.
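
A minimal hypothetical sketch of this behavior (the theories and names are purely illustrative):

  T: THEORY
  BEGIN
    x: nat                       % uninterpreted, so it can be mapped
    y: nat = x + 1
  END T

  user: THEORY
  BEGIN
    e: nat = 3
    IMPORTING T{{x := e}} AS T1
    uses_y: LEMMA T1.y = 4       % y is found in T1 as usual
    uses_x: LEMMA T1.x = 3       % T1.x is treated as a macro for e
  END user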

Cleaning up Specifications

Developing specifications and proofs often leads to the creation of definitions and lemmas that turn out not to be necessary for the proof of the properties of interest. This results in specifications that are difficult to read. Removing the unneeded declarations is not easy, as it is difficult to know whether they are actually used or not.

The new commands unusedby-proof-of-formula and unusedby-proofs-of-formulas facilitate this. The unusedby-proof-of-formula command creates a 'Browse' buffer listing all the declarations that are unused in the proof of the given formula. Removing all of these declarations, together with those that follow the given formula, should give a theory that typechecks and whose proofchain is still complete, if it was complete in the full theory. This could be done automatically in the future.

Binary Files

PVS specifications are saved as binary (.bin) files in order to make restarting the system faster. Unfortunately, loading them often caused problems. This was handled by catching any errors and simply retypechecking, so in many cases the binary files actually made things slower.

Until PVS version 3.2, binary files corresponded to the specification files. This meant that if there was a circularity among the files (e.g., theories A and C in one file and B in another, with A importing B and B importing C), then there was no way to load the binary files. In 3.2, bin files correspond to theories, and are kept in a pvsbin subdirectory of the current context.
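
For example, the per-file binary files of earlier versions could not be loaded for the following hypothetical file layout, whereas the per-theory files can:

  % one.pvs
  A: THEORY BEGIN IMPORTING B
    a: nat = b + 1
  END A
  C: THEORY BEGIN
    c: nat = 0
  END C

  % two.pvs
  B: THEORY BEGIN IMPORTING C
    b: nat = c + 1
  END B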

However, there was a more serious problem with the binary files: loading them took more space, and the proofs took longer to run. The reason is that the shared structure created when typechecking source files was mostly lost when loading binary files; only the structure shared within a given specification file was preserved. This matters because, in particular, types are kept in canonical form, and when they are shared, testing whether two types are equal or compatible is much faster.

The binary files are now saved in a way that allows the shared structure to be regained. In fact, there is now more sharing than obtained by typechecking. This is one of the main reasons that this release took so long, as this forced many new invariants on the typechecker.

The payoff is that, in general, binary files load around five times faster than typechecking, and proofs run a little faster because of the increased sharing. This is based on only a few samples; in the future we plan to systematically time the specifications in our validation suite.

Generating HTML

The commands html-pvs-file and html-pvs-files generate HTML for PVS specification files. These can be generated in place, or in a specified web location. This is driven by setting a Lisp variable *pvs-url-mapping*, as described below.

The in-place version creates a pvshtml subdirectory for each context and writes the HTML files there. This is done by copying the PVS file and adding link information, so that comments and whitespace are preserved. Note that there is no html-theory command. This is not an oversight: in creating an HTML file, links are created that point to the declarations of external HTML files, so if there were a way to generate HTML corresponding to both a theory and a PVS file, it would be difficult to decide which one a link should refer to.

HTML files can be generated in any order, and may point to library files and the prelude. Of course, if these files do not exist, then following the links will produce a browser error. The html-pvs-files command attempts to create all files that are linked to; failure is generally due to write-permission problems.

Usually it is desirable to put the HTML files somewhere anybody on the web can see them, in which case you should set the *pvs-url-mapping* variable. It is probably best to set this in your ~/.pvs.lisp file so that it is used consistently. It should be set to a list value, as in the following example.

  (setq *pvs-url-mapping*
        '("http://www.csl.sri.com/~owre/"
          "/homes/owre/public_html/"
          ("/homes/owre/pvs-specs" "pvs-specs" "pvs-specs")
          ("/homes/owre/pvs3.2" "pvs-specs/pvs3.2" "pvs-specs/pvs3.2")
          ("/homes/owre/pvs-validation/3.2/libraries/LaRC/lib"
           "pvs-specs/validation/nasa"
           "pvs-specs/validation/nasa")))

The first element of this list forms the base URL, and is used to create a <base> element in each file. The second element is the actual directory associated with this URL, and is where the html-pvs-file commands put the generated files. The rest of the list is composed of three-element lists: a specification directory, a (possibly relative) URL, and a (possibly relative) HTML directory. In the above example, the base URL is http://www.csl.sri.com/~owre/, which the server associates with /homes/owre/public_html. The next entry says that specs found in (a subdirectory of) /homes/owre/pvs-specs are to have relative URLs under pvs-specs, with relative subdirectories treated similarly. Thus a specification in /homes/owre/pvs-specs/tests/conversions/ will have a corresponding HTML file in /homes/owre/public_html/pvs-specs/tests/conversions/ and correspond to the URL http://www.csl.sri.com/~owre/pvs-specs/tests/conversions/. In this example PVS is installed in /homes/owre/pvs3.2, so references to the prelude and distributed libraries (such as finite sets) are mapped as well. Note that in this example all the relative structures are the same, but they do not have to be.

The *pvs-url-mapping* is checked to make sure that the directories all exist, though currently no URLs are checked (if anybody knows a nice way to do this from Lisp, please let us know). If a subdirectory is missing, the system will prompt you for each subdirectory before creating it. An n or q answer terminates processing without creating the directory, y creates the directory and continues, and ! creates any needed directories without further questions.

If a *pvs-url-mapping* is given, it must be complete for the file specified in the html-pvs-file command. In practice, this means that your PVS distribution must be mapped as well. PVS will complain if the mapping is not complete, in which case simply add more information to the *pvs-url-mapping* list.

No matter which version is used, the generated HTML (actually XHTML) file contains a number of <span> elements. These simply provide a way to add class attributes, which can then be used in Cascading Style Sheet (CSS) files to define fonts, colors, etc. The classes currently supported are:

span.comment
span.theory
span.datatype
span.codatatype
span.type-declaration
span.formal-declaration
span.library-declaration
span.theory-declaration
span.theory-abbreviation-declaration
span.variable-declaration
span.macro-declaration
span.recursive-declaration
span.inductive-declaration
span.coinductive-declaration
span.constant-declaration
span.assuming-declaration
span.tcc-declaration
span.formula-declaration
span.judgement-declaration
span.conversion-declaration
span.auto-rewrite-declaration

See the <PVS>/lib/pvs-style.css file for examples. This file is automatically copied to the base directory if it doesn't already exist, and it is referenced in the generated HTML files. Most browsers underline links, which can make some operators difficult to read, so this file also suppresses underlines. This file may be edited to suit your own taste or conventions.

Both html-pvs-file commands take an optional argument. Without it, many of the common prelude operators are not linked; with it, all operators get links. Overloaded operators not from the prelude get links in either case.

Default Strategies

There is now a default-strategy that is used by the prover for the prove-using-default commands, and it may be used as a parameter in pvs-strategies files. For example, the pvs-strategies file in the home directory may reference it, with the actual value set differently in different contexts.

Better handling of TCCs in Proofs

While in the prover, the typechecker now checks the sequent to see if the given expression needs to have a TCC generated. It does this by examining the formulas of the sequent, to see if the given expression occurs at the top level, or in a position from which an unguarded TCC would be generated. Thus if 1/x appears in the sequent in an equation y = 1/x, the TCC x /= 0 will not be generated. But if the expression only appears in a guarded formula, for example, x = 0 IMPLIES y = 1/x, then the TCC will still be generated.

This is sound because, for the expression to appear in the sequent, the necessary TCCs must already have been generated. This greatly simplifies proofs where annoying TCCs pop up over and over, and where the judgement mechanism is too restrictive (for example, judgements cannot currently state that x * x >= 0 for any real x).
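
As a hypothetical illustration of the difference, using the case rule to introduce a new expression:

  ;; with the unguarded antecedent  y = 1/x  already in the sequent:
  (case "1/x > 1")    ; no x /= 0 TCC subgoal is generated
  ;; with only the guarded formula  x = 0 IMPLIES y = 1/x  in the sequent:
  (case "1/x > 1")    ; the TCC subgoal x /= 0 is still generated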

Obviously, this could affect existing proofs, though it generally makes them much simpler.

typepred! rule and all-typepreds strategy

Any given term in the sequent may have associated implicit type constraints. When a term is first introduced into a sequent, TCCs may be generated, either on the formula itself or as new branches in the proof. The term may subsequently be rewritten, but an implicit TCC remains associated with it. For example, the term 1/f(x) may be introduced and later simplified to 1/(x * x - 1). Since f(x) was known to be nonzero, it follows that x * x - 1 is also nonzero (in this context), though this is not reflected in the types or judgements.

The typepred! rule has been modified to take an :implicit-typepreds? argument, which looks for occurrences of the given expression in the sequent and creates the implicit type constraint (if any) as a hypothesis. It does this only for occurrences that are unguarded, i.e., that occur positively. This is stricter than the way TCCs are actually generated, and it is needed because, for example, conjunction is commutative and can be rewritten in the prover. Thus the hypothesis x /= 0 => 1/x /= x could be rewritten to 1/x = x => x = 0, and the left-to-right reading would generate x /= 0, which is obviously unsound. Note that this does not mean that TCC generation or applying the rewrite is unsound, as the TCC simply says that a type can be assigned to the term. Technically, a TCC for a term of the form A => B could be a disjunction (A => TCC(B)) OR (NOT B => TCC(A)), but this is more costly in many ways, and rarely useful in practice.

Thus the command (typepred! "x * x - 1" :implicit-typepreds? t) generates the hypothesis x * x - 1 /= 0 assuming that the term occurs positively in a denominator.

A generally more useful strategy is all-typepreds. This collects the implicit type constraints for each subexpression of the specified formula numbers. This can be especially handy for automating proofs, though there is the potential of creating a lot of irrelevant hypotheses.
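
For example (the formula numbers are illustrative, and the usual fnum conventions are assumed):

  (all-typepreds 1 2)   ; implicit type constraints for the subexpressions of formulas 1 and 2
  (all-typepreds *)     ; or for every formula in the sequent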

grind-with-ext and reduce-with-ext

There are two new prover commands: grind-with-ext and reduce-with-ext. These are essentially the same as grind and reduce, but they also apply extensionality. This is especially useful when reasoning about sets.
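
For instance, proving an equality between sets typically requires extensionality, so a hypothetical lemma such as the following can often be discharged directly with (grind-with-ext), where (grind) alone may leave the set equality unproved:

  sets_example[T: TYPE]: THEORY
  BEGIN
    A, B: VAR set[T]
    union_comm: LEMMA union(A, B) = union(B, A)
  END sets_example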

New forward chain commands

There are three new forward-chaining commands: forward-chain@, forward-chain*, and forward-chain-theory. forward-chain@ takes a list of forward-chaining lemmas (of the form A1 & ... & An => B, where the free variables of B occur among the free variables of the Ai) and attempts the forward-chain rule with each until the first one succeeds. forward-chain* takes a list and repeatedly forward-chains until there is no change; whenever a step succeeds, it starts again at the beginning of the list. forward-chain-theory creates a list of the applicable lemmas of the given theory and invokes forward-chain*.
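
For example, the following hypothetical lemma has the required shape, with the free variables of the conclusion occurring among those of the antecedents:

  lt_trans_fc: LEMMA FORALL (x, y, z: real): x < y & y < z => x < z

Such lemmas can then be supplied to forward-chain@ or forward-chain*, or collected from a theory with forward-chain-theory.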

TeX Substitutions

TeX substitutions have been improved, allowing substitutions to be made for various delimiters, as shown below. The TeX commands are defined in the pvs.sty file at the top level of the PVS directory. They consist of the command prefix listed in the table, followed by 'l' or 'r' to indicate the left or right delimiter.

  Name                       Symbols    TeX Command Prefix
  parentheses                ( )        \pvsparen
  brackets                   [ ]        \pvsbracket
  record type constructors   [# #]      \pvsrectype
  bracket bar                [| |]      \pvsbrackvbar
  parenthesis bar            (| |)      \pvsparenvbar
  brace bar                  {| |}      \pvsbracevbar
  list constructor           (: :)      \pvslist
  record constructor         (# #)      \pvsrecexpr

These can be customized either by including new mappings for the symbols in a pvs-tex.sub file, or by overriding the TeX commands in your LaTeX file. It may be useful to look at the default pvs.sty and pvs-tex.sub files; both are located in the top level of the PVS installation (as reported by M-x whereis-pvs).

add-declaration and IMPORTINGs

The add-declaration command now allows IMPORTINGs. This is most useful during a proof, when a desired lemma is in a theory that has not been imported. Note that the file may no longer typecheck afterwards because of ambiguities, even though the proof goes through just fine; such errors are typically very easy to repair.
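
For example, during a proof one might add the following via M-x add-declaration (the library and theory names are hypothetical):

  IMPORTING mylib@useful_lemmas

after which the lemmas of useful_lemmas can be cited with the lemma rule.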

Prelude additions

Although no new theories have been added, there are a number of new declarations, mostly lemmas. These are in the theories sets, function_inverse, relation_defs, naturalnumbers, reals, floor_ceil, exponentiation, and finite_sets.

The bv_cnv theory was removed, as its conversion can sometimes hide real type errors. To reenable the conversion, just add the following line to your specification.

  CONVERSION fill[1]

Bug Fixes

The PVS Bugs List shows the status of reported bugs. Not all of these have been fixed as of PVS version 3.2. Those marked feedback or closed are the ones that have been fixed. The more significant bug fixes are described in the following subsections.

Retypechecking

PVS specifications often span many files, with complex dependencies. The typechecker is lazy, so that only those theories affected by a change will need to be retypechecked. In addition, not all changes require retypechecking. In particular, adding comments or whitespace will cause the typechecker to reparse and compare the theories to see if there was a real change. If not, then the place information is updated and nothing needs to be retypechecked. Otherwise, any theory that depends on the changed theory must be untypechecked. This means that the typechecker cannot decide if something needs to be untypechecked until it actually reparses the file that was modified.

Thus when a file is retypechecked, it essentially skips typechecking declarations until it reaches an importing, at which point it retypechecks that theory. When it reaches a theory that has actually changed, untypechecking is triggered for all theories that import the changed theory. The bug was that only the top level theory was untypechecked correctly; any others would be fully untypechecked, but since they were already in the process of being typechecked, earlier declarations would no longer be valid.

The fix is to keep a stack of the theories being typechecked and the importing they are processing, and when a change is needed, the theories are only untypechecked after the importing.

Quantifier Simplification

In PVS 3.1, a form of quantifier simplification was added, so that forms such as FORALL x: x = n IMPLIES p(x) were automatically simplified to p(n). In most cases, this is very useful, but there are situations where the quantified form is preferable, either to trigger forms of auto-rewriting or to allow induction to be used.

Many proof commands now include a :quant-simp? flag to control this behavior. By default, quantifier simplification is not done; setting the flag to t allows the simplification.

simplify, assert, bash, reduce, smash, grind, ground, lazy-grind, crush, and reduce-ext all have this flag.
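
For example, using the usual keyword syntax for prover flags:

  (assert :quant-simp? t)   ; allow FORALL x: x = n IMPLIES p(x) to simplify to p(n)
  (grind :quant-simp? t)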

Incompatibilities

Ground Decision Procedure Completeness

The decision procedures have been made more complete, which means that some proofs may finish sooner. Unfortunately, some proofs may also loop that did not before (2). This is usually due to division; a workaround is to use the name-replace command to replace the term involving division with a new name, and then run the decision procedures (e.g., assert). If you find that the prover is taking too long, you can interrupt it with C-c C-c and run :bt to see the backtrace. If it shows something like the following, then you know you are in the ground decision procedure.

 find1 <-
   pr-find <- chainineqs <- transclosure <- addineq <- process1 <-
   ineqsolve <- arithsolve <- solve <- pr-merge <- process1 <-
   ineqsolve <- arithsolve <- solve

At this point, you can either run (restore) and try a different command (like name-replace), or :cont in the hope that it will terminate given a little more time. And yes, there are situations where there is no loop at all; it just takes a long time to finish.
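
For example, after interrupting, a workaround session might look like this (the term and the new name are hypothetical):

  (restore)                             ; abandon the looping command
  (name-replace "q" "x / (y * y + 1)")  ; replace the term involving division with q
  (assert)                              ; rerun the decision procedures on the renamed term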

Actuals not allowed for Current Theory

In the past, a name could reference the actuals of the current theory. This was actually a mistake, as the actuals were generally ignored in this case. Though this rarely caused problems, a few reported bugs were directly due to it, so the system now reports that the actuals are not allowed. To fix this, simply remove the actual parameters. Note that this can affect both specifications and proofs.
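
A hypothetical example of the change:

  th[T: TYPE]: THEORY
  BEGIN
    c: T
    % d: T = th[T].c    previously accepted (the actuals were ignored), now reported as an error
    d: T = c            % simply drop the actual parameters
  END th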

Referencing Library Theories

In earlier versions of PVS, once a library theory was typechecked, it could be referenced without including the library id. This is no longer valid. First of all, if the given theory appears in two different libraries, it is ambiguous. Worse, if it also appears in the current context, there is no way to disambiguate. Finally, even if there is no ambiguity at all, there can still be a problem. Consider the following:

A: THEORY ... IMPORTING B, C ... END A

B: THEORY ... IMPORTING lib@D ... END B

C: THEORY ... IMPORTING D ... END C

This typechecks fine in earlier versions of PVS, but if in the next session the user decides to typecheck C first, a type error is produced, since the D imported by C can only be found through the library reference in B. The fix is for C to import lib@D explicitly.

Renaming of Bound Variables

The renaming of bound variables has been improved, so that variables are generally named apart. In some cases this leads to proofs failing for obvious reasons (an inst variable no longer exists, or a skolem constant has a different name).

bddsimp and Enumeration Types

bddsimp has been fixed to return nicer formulas when enumeration types are involved. Enumeration types are translated when input to the BDD package, but the output was left untranslated. For example, if the enumeration type is {a, b, c}, the resulting sequents could have the form

  a?(x)        b?(x)
  |----        |----        |----
               a?(x)        b?(x)
                            a?(x)

With this change, instead one gets

  a?(x)        b?(x)        c?(x)
  |----        |----        |----
  

This is nicer, and matches what is returned by prop. It makes certain proofs faster, because they can use the positive information rather than the long and irrelevant negative information. Of course, the different formula numbering can affect existing proofs.

Prettyprinting Theory Instances

The prettyprint-theory-instance command was introduced along with theory interpretations, but it was restricted to theory instances that came from theory declarations, which it would simply prettyprint. Unfortunately, such theories are very restricted, as they may not refer to any local declarations. The prettyprint-theory-instance command now allows any theory instance to be given, and displays the theory with the actuals and mappings applied. This is not a real theory, just a convenient way of looking at all the parts of the theory instance.

Assuming and Mapped Axiom TCC Visibility Rules

The visibility rules for assumings and mapped axioms have been modified. Most TCCs are generated so that the entity that generated them is not visible in their proof; this is done simply by inserting the TCCs before the generating declaration. Assuming and mapped axiom TCCs are a little different, in that they may legitimately refer to declarations that precede them in the imported theory. To handle this, these TCCs are treated specially when creating the proof context: all declarations preceding the assuming or axiom that generated the TCC are visible in the proof of the TCC.

Replacing actuals including types

The replace prover command now performs the replacement in types as well as in expressions when the :actuals? flag is set. It is possible, though unlikely, that this could cause existing proofs to fail; it is more likely that some branches will be proved sooner.
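
For example, given an equation for a constant in antecedent formula -1, the following step (with an illustrative formula number) also rewrites occurrences that appear inside actual parameters and types:

  (replace -1 :actuals? t)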

expand Rule uses Full Name

When the expand rule was given a full name, it would ignore everything but the identifier. This has been fixed, so that the other information is also used. For this command the name is treated as a pattern, and any unspecified part of the name matches anything. Thus th.foo matches foo only if it comes from theory th, but matches any instance or mapping of th; foo[int] matches any occurrence of foo from any theory, as long as it has a single actual parameter matching int. The occurrence number counts only the matching instances.
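
For example (the names are illustrative):

  (expand "th.foo")     ; expand foo only where it comes from (some instance of) theory th
  (expand "foo[int]")   ; expand any foo whose single actual parameter matches int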

This change is only going to affect proofs in which more than just an identifier is given to expand.

finite_sets min and max renamed

In theory finite_sets_minmax the functions min and max defined on the type parameter have been renamed to fsmin and fsmax, respectively. This was done because they are only used in the definitions of min and max over finite sets, and can cause ambiguities elsewhere.

induct no longer beta-reduces everything

A bug was reported in which induct generated a large number of subgoals; this turned out to be due to the indiscriminate use of beta, which was intended to simplify newly added formulas but could also affect the conclusion and subsequent processing. To fix this, beta is now applied only to newly generated formulas. This may make some proofs fail, though they can generally be repaired simply by using beta after induct.