# [PVS] Theorem instances involving bound variables

```
Hello all,

I have run into problems with theorem instances that involve bound
variables. I've tried to distill the problem below.

----------------------------------------------------------------------
th1[n : nat] : THEORY
BEGIN
ASSUMING
n_asm : ASSUMPTION n > 1 and (odd?(n) or even?(n))
ENDASSUMING
f(x : nat) : nat = x + n
f_TH : THEOREM forall (m : nat) : f(m) > 1
END th1

use : THEORY
BEGIN
IMPORTING th1
g(k : subrange(2,10)) : [nat -> nat] = f[k]
g_TH : THEOREM forall (k : subrange(2,10), l : nat) : g(k)(l) > 1
END use
----------------------------------------------------------------------

The first theory defines a function f and a theorem f_TH about f. In
the second theory, function g is defined using the instance f[k],
where k is an argument of g. This generates a TCC requiring the
assumption n_asm to hold with n instantiated to k.
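
The generated TCC looks roughly like this (reconstructed from memory,
so the exact name and form may differ):

----------------------------------------------------------------------
% Assuming TCC generated for the instance f[k] in the body of g
g_TCC1: OBLIGATION
  FORALL (k: subrange(2, 10)): k > 1 AND (odd?(k) OR even?(k))
----------------------------------------------------------------------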

The problem arises when using f_TH to prove g_TH. Skolemization and
expansion of definitions yield the following sequent:

[-1]  2 <= k!1
[-2]  k!1 <= 10
[-3]  l!1 >= 0
|-------
{1}   f[k!1](l!1) > 1

The prover command (lemma "f_TH[k!1]") proves the sequent, but it
also produces a TCC subgoal generated from n_asm for k!1. That TCC is
easily provable here, since it is essentially the same TCC that type
checking already generates. My problem is that in the real
specification where this crops up, the theories are parameterized
with large lambda terms, and proving the TCCs is nontrivial, so
discharging each of them twice is painful.
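
For concreteness, the proof of g_TH goes roughly like this (standard
prover commands; I am quoting the script from memory, so it may not
match the dump exactly):

----------------------------------------------------------------------
(""
 (skosimp*)           ; introduces k!1 and l!1
 (expand "g")         ; exposes f[k!1](l!1)
 (lemma "f_TH[k!1]")  ; closes the goal, but spawns the n_asm TCC again
 (assert))
----------------------------------------------------------------------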

Is there any way to refer to the unnamed theory instance th1[k!1] in
proofs to avoid the duplicate TCCs? Or alternatively, is it possible
to name the theory instance?
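
What I have in mind is something along these lines (purely
hypothetical syntax; I do not know whether anything like this is
legal when the actual is a local variable such as k):

----------------------------------------------------------------------
th1_k : THEORY = th1[k]   % hypothetical: a named instance of th1
----------------------------------------------------------------------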

A PVS dump file containing the theory above and the proofs is at
<http://www.cs.tut.fi/~pk/tmp/inst.dump>.
--
pertti

```