theory Phi_Logic_Programming_Reasoner
imports Main "HOL-Eisbach.Eisbach" "HOL-Eisbach.Eisbach_Tools" "Phi_Document.Base"
keywords "except" "@action" :: quasi_command
and "φreasoner" "φreasoner_ML" :: thy_decl % "ML"
and "print_φreasoners" :: diag
abbrevs
"<premise>" = "𝗉𝗋𝖾𝗆𝗂𝗌𝖾"
and "<simprem>" = "𝗌𝗂𝗆𝗉𝗋𝖾𝗆"
and "<@GOAL>" = "❙@❙G❙O❙A❙L"
and "<threshold>" = "𝗍𝗁𝗋𝖾𝗌𝗁𝗈𝗅𝖽"
begin
subsubsection ‹Prelude Settings›
ML ‹Timing.cond_timeit false "asd" (fn () => OS.Process.sleep (seconds 1.0))›
ML_file ‹library/pattern.ML›
ML_file ‹library/helpers.ML›
ML_file ‹library/handlers.ML›
ML_file ‹library/pattern_translation.ML›
ML_file_debug ‹library/tools/simpset.ML›
definition 𝗋Require :: ‹prop ⇒ prop› ("𝗋REQUIRE _" [2] 2) where [iff]: ‹𝗋Require X ≡ X›
typedecl action
definition Action_Tag :: ‹prop ⇒ action ⇒ prop› ("_ @action _" [3,4] 3)
where [iff]: ‹Action_Tag P A ≡ P›
lemma Action_Tag_I:
‹P ⟹ P @action A›
unfolding Action_Tag_def .
ML_file_debug ‹library/reasoner.ML›
lemma 𝗋Require_I[φreason 1000]: ‹PROP P ⟹ PROP 𝗋Require P› unfolding 𝗋Require_def .
section ‹Introduction›
text ‹
φ-Logic Programming Reasoner is an extensible reasoning engine
in the style of logic programming languages like Prolog.
It allows arbitrary user reasoners to be integrated freely, and applies them selectively
by matching patterns against the goals.
The reasoning is a depth-first heuristic search guided by the ∗‹priority› of each branch.
A reasoning state is represented by a ∗‹pair› of ▩‹Proof.context› and a sequent, of type
▩‹Proof.context * thm›.
The search branches of a reasoning state are the reasoners admissible on its sequent.
A reasoner is admissible on a sequent if the sequent matches the pattern of the reasoner
(cf. patterns in \cref{sec:patterns}).
The reasoning accepts several reasoning states and outputs ∗‹one› reasoning state, namely the
first one that satisfies the termination condition, ∗‹or› none if every search branch fails.
The priorities of rules indicate which rules are preferred among the admissible reasoners.
The priority makes sense only locally, among all admissible reasoners on a reasoning state.
The accumulation of priority values (i.e. the sum of the priorities of all applied reasoners) of a
reasoning state is meaningless and serves only for debugging.
Because the search is depth-first, the first result reached is the optimal one w.r.t. each search
branch in a greedy sense (a global optimum is senseless here because the priority accumulation is
meaningless).
The sequent of the reasoning state is a Harrop Formula (HF), e.g.,
\[ ‹Antecedent1 ⟹ Antecedent2 ⟹ Conclusion›, \]
where antecedents represent sub-goals that have to be reasoned \textit{in order}.
The \xphi-LPR engine reasons antecedents in order, invoking the reasoners that match the pattern
of the leading antecedent best (cf. Priority).
An antecedent can be augmented by conditions that can be utilized during the reasoning.
It can also be universally quantified.
\[ ‹(⋀x. P1 x ⟹ P2 x ⟹ Conclusion_of_Antecedent1 x) ⟹ A2 ⟹ C› \]
A typical reasoner deduces the conclusion of the antecedent by applying an introduction
rule like ‹A11 x ⟹ A12 x ⟹ Conclusion_of_Antecedent1 x›, resulting in
\[ ‹(⋀x. P1 x ⟹ P2 x ⟹ A11 x) ⟹ (⋀x. P11 x ⟹ P12 x ⟹ A12 x) ⟹ A2 ⟹ C›. \]
Then, the engine reasons about the now leading antecedent ‹(⋀x. P1 x ⟹ P2 x ⟹ A11 x)›
recursively. The antecedent list of a reasoning state resembles the call stack of usual programs.
From this perspective, the introduction rule of \<^prop>‹Antecedent1› invokes two 'sub-routines'
(or the reasoners of) \<^prop>‹A11› and \<^prop>‹A12›.
›
section ‹The Engine \& The Concepts›
text ‹
The engine is implemented in ▩‹library/reasoner.ML›.
▩‹structure Phi_Reasoner = struct
(*Reasoning state*)
type context_state = Proof.context * thm
type name = term (* the name as a term is just for pretty printing *)
val pattern_on_conclusion : term -> pattern
val pattern_on_condition : term -> pattern
(*A reasoner is a quintuple*)
type reasoner = {
name: name,
pos: Position.T,
pattern: pattern list,
blacklist: pattern list,
tactic: context_state -> context_state Seq.seq
}
type priority = int
val add : priority * reasoner -> Context.generic -> Context.generic
val del : name -> Context.generic -> Context.generic
val reason : context_state -> context_state option
val auto_level : int Config.T
exception Success of context_state
exception Global_Cut of context_state
...
end
››
paragraph ‹Patterns \label{sec:patterns}›
text ‹
The ❙‹pattern› and the ❙‹blacklist› stipulate the range in which a reasoner will be invoked.
A reasoner is invoked iff the antecedent matches at least one pattern in the pattern list and none
in the blacklist.›
text ‹
There are two kinds of patterns, those matching the conclusion and those matching the condition,
constructed by ▩‹pattern_on_conclusion› and ▩‹pattern_on_condition› respectively.
›
text ‹❙‹Prefix ▩‹var››. A schematic variable in a pattern can have name prefix ▩‹var_›.
In this case, the variable only matches schematic variables.
∗‹Remark›: It is important to write schematic variables in patterns explicitly. The engine
does not convert any free variables to schematic variables implicitly.›
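text ‹For instance (a hypothetical sketch only illustrating the naming convention; ‹MyAnt› is a
placeholder antecedent), a reasoner registered with pattern ‹MyAnt ?var_output ?input› is invoked
only on ‹MyAnt› antecedents whose first argument is still a schematic variable, i.e., still an
unassigned output.›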
paragraph ‹Automatic Level› text ‹Configured by ▩‹auto_level›, the automatic level
is a general configuration deciding whether the engine applies
some aggressive tactics that may consume considerable time or never terminate.
There are 3 levels:
▸[0]: the safest level, which may mean a manual mode for some reasoners.
It does not exclude non-termination or blocking when some tactics are necessary for the
features. Methods @{method simp} and @{method clarify} are acceptable at this level.
▸[1]: relatively safe automation, where aggressive tactics are forbidden but non-termination is
still possible. Method @{method auto} is forbidden at this level because it blocks too easily.
▸[2]: the most powerful automation, where no limitation is imposed on automation strategies.›
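text ‹A user reasoner written in ML may respect this configuration by a check like the following
sketch, which assumes only the ▩‹auto_level› entry from the signature above
(▩‹aggressive_tac› and ▩‹safe_tac› are hypothetical placeholders):
▩‹if Config.get ctxt Phi_Reasoner.auto_level >= 2 then aggressive_tac else safe_tac›.›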
paragraph ‹Priority \label{sec:cut}›
text ‹
The reasoning is a depth-first search and every reasoner is registered with a priority deciding
the order of attempting the reasoners. Reasoners with higher priority are attempted first.
According to their priorities, reasoners fall into 3 sorts corresponding to
different pruning optimization strategies.
▸ When the priorities of the candidate reasoners on a certain reasoning state are all less than 1000,
the reasoning behaves normally: it attempts the highest-priority candidate and, once that fails,
backtracks to the next candidate.
▸ When the highest priority of the candidates is $\geq$ 1000 and $<$ 1,000,000,
this candidate becomes a ∗‹local cut›. The reasoning attempts only the local cut and if it fails,
no other candidates will be attempted, but the backtrack is still propagated to the upper layer
(of the search tree).
Any presence of a candidate with priority $\geq$ 1000 makes the reasoning (at this point)
confident (in the sense that no alternative search branch will be attempted).
▸ When the highest priority of the candidates is $\geq$ 1,000,000,
this candidate becomes a ∗‹global cut›, which forgets all the previous search history.
No backtracking is propagated to the points before the global cut, which improves performance.
Once the reasoning of the branch of the cut fails, the whole reasoning fails.
Reasoners of priority $\geq$ 1000 are named ∗‹confident reasoners› and others are
∗‹submissive reasoners›.
∗‹Remark›: a local cut reasoner can throw ▩‹Global_Cut s› to trigger a global cut with the
reasoning state ▩‹s›.
›
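text ‹For instance (a hypothetical sketch only illustrating the thresholds; ▩‹some_rule› is a
placeholder name), ▩‹declare some_rule[φreason 800]› registers a submissive rule subject to
ordinary backtracking, ▩‹declare some_rule[φreason 2000]› makes it a local cut, and
▩‹declare some_rule[φreason 1000000]› makes it a global cut.›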
paragraph ‹Termination›
text ‹The reasoning terminates when:
▪ Any reasoning state has no antecedent any more or all its designated leading
antecedents are solved. This reasoning state is returned.
▪ Any reasoner throws ▩‹Success result›.
▪ All accessible search paths are traversed.
›
text ‹‹𝗋Success› is an antecedent that throws ▩‹Success›.
Therefore it marks the reasoning as succeeded.
A typical usage of ‹𝗋Success› is shown in the following sequent,
\[ ‹A1 ⟹ A2 ⟹ 𝗋Success ⟹ P ⟹ Q› \]
which expresses that the reasoning succeeds after solving \<^prop>‹A1›, \<^prop>‹A2›, and outputs
the result \<^prop>‹P ⟹ Q›.›
text ‹‹Pure.prop P› is helpful to protect the remaining antecedents if you only want to reason
about the first few antecedents instead of all of them, e.g.,
\[ ‹Solve_A1 ⟹ Pure.prop (Protect_A2 ⟹ C)› \]›
paragraph ‹Output›
text ‹The output reasoning state can be:
▪ The first traversed reasoning state that has no antecedent or all the designated leading
antecedents are solved.
▪ The ▩‹result› thrown by ▩‹Success result›.
›
text ‹If none of the above is reached during a reasoning process, the process returns nothing
(▩‹None› or ▩‹Seq.empty›).
The reasoning only outputs ∗‹milestone states›, which represent that the problem has indeed been
solved (at least partially), and never outputs unfinished intermediate reasoning states.
Milestone states are explicitly annotated by the user (e.g.,
by antecedent \<^prop>‹𝗋Success› or by setting the priority to 1,000,000).
Any other intermediate reasoning state is not considered a successfully finished state
and thus is not output.›
section ‹Provide User Reasoners \& Apply the Engine›
text ‹\xphi-LPR can be augmented by user reasoners.
The system predefines a resolution-based reasoner using introduction rules and elimination rules.
Other arbitrary reasoners can also be built from tactics or ML code.›
subsection ‹Reasoning by Rules›
text ‹Attribute @{attribute_def φreason} is provided for declaring resolution rules.
\begin{matharray}{rcl}
@{attribute_def φreason} & : & ‹attribute›
\end{matharray}
\small
\<^rail>‹
@@{attribute φreason} (@{syntax add_rule} | 'add' @{syntax add_rule} | 'del')
;
@{syntax_def add_rule}: @{syntax priority}?
('for' @{syntax patterns})? ('except' @{syntax blacklist})?
;
@{syntax_def priority}: (@{syntax nat} | '!')
;
@{syntax_def patterns}: (() + @{syntax term})
;
@{syntax_def blacklist}: (() + @{syntax term})
›
\normalsize
➧ @{attribute φreason}~▩‹add› declares reasoning rules used in φ-LPR.
@{attribute φreason}~▩‹del› removes the reasoning rule.
If no keyword ▩‹add› or ▩‹del› is given, ▩‹add› is the default option.
➧ The @{syntax patterns} and @{syntax blacklist} are those described in \cref{sec:patterns}.
For introduction rules, the patterns and the blacklist match only the conclusion of the
leading antecedent; for elimination rules, they match only the conditions of the
leading antecedent.
Patterns can be omitted. For an introduction rule, the default pattern is the conclusion
of the rule; for an elimination rule, the default is its first premise.
➧ @{syntax priority} can be a natural number or an exclamation mark denoting the priority
1,000,000, i.e., the minimal priority for a global cut.
If the priority is not given explicitly, it defaults to 100.
›
text ‹∗‹Remark›: Rules of priority $\geq$ 1000 are named ∗‹confident rules› and others are
∗‹submissive rules›.›
text ‹∗‹Remark›: Attribute @{attribute φreason} can be used without any argument.
▩‹[[φreason]]› denotes exactly ▩‹[[φreason add]]›.
However, the usage without arguments is not recommended
for a technical reason: in this case
the attribute cannot obtain the position of the associated reasoning rule, and
this position is displayed in debug printing.›
paragraph ‹Example›
declare conjI[φreason add] TrueI[φreason 1000]
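text ‹A more elaborate declaration may restrict a rule by patterns and a blacklist, e.g.
(a hypothetical sketch only illustrating the syntax),
▩‹declare disjI1[φreason 100 for ‹?A ∨ ?B› except ‹?A ∨ True›]›.›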
paragraph ‹‹𝗋›Feasible \label{sec:rFeasible}›
text ‹Cut rules, including local cuts and global cuts, are those of priority $\geq$ 1000.
A cut rule can have at most one special ‹𝗋Require› antecedent at the leading position,
which determines the condition under which the rule may be applied, e.g. the following rule can
be applied only if ‹A1› and ‹A2› are solvable.
\[ ‹𝗋Require (A1 &&& A2) ⟹ A3 ⟹ C› \]
It provides a mechanism to constrain the semantic conditions of applying the rule,
whereas the pattern matching mentioned earlier can only check syntactic conditions.
›
subsection ‹Reasoners by Isar Methods and ML code›
text ‹
There are two commands defining reasoners, respectively by Eisbach expression and by ML code.
\begin{matharray}{rcl}
@{command_def φreasoner} & : & ‹local_theory → local_theory›\\
@{command_def φreasoner_ML} & : & ‹local_theory → local_theory›\\
\end{matharray}
\<^rail>‹
@@{command φreasoner} @{syntax name} @{syntax priority} @{syntax patterns'} '=' @{syntax Eisabach_method}
;
@@{command φreasoner_ML} @{syntax name} @{syntax priority} @{syntax patterns'} '=' @{syntax ML_code}
;
@{syntax_def patterns'}: '(' (@{syntax term} + '¦') ')'
›
➧ @{command φreasoner} defines a reasoner using an Eisbach expression. The Eisbach expression
defines a proof method in Isabelle/Isar and this proof method is invoked on the leading antecedent
as a sub-goal when @{syntax patterns'} match.
➧ @{command φreasoner_ML} defines a reasoner from ML code. The given code should be an ML function
of type ▩‹context_state -> context_state Seq.seq›, i.e., a contextual tactic.
›
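text ‹As a sketch (the antecedent ‹My_Ant› and the trivial tactic body are hypothetical; concrete
instances such as the ‹Matches› reasoner follow later in this theory), a reasoner defined by
@{command φreasoner_ML} has the shape
▩‹φreasoner_ML My_Ant 100 (‹My_Ant ?x›) = ‹fn (ctxt, sequent) => Seq.empty››
where the function body produces the resulting reasoning states.›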
subsection ‹Apply the Engine›
text ‹There are two ways to use the reasoning engine, from ML code by using ▩‹Phi_Reasoner.reason›,
and as a proof method.›
subsubsection ‹Proof Method›
text ‹
The reasoning engine can be invoked on a proof state as the following proof method.
\begin{matharray}{rcl}
@{method_def φreason} & : & ‹method›\\
\end{matharray}
\<^rail>‹
@@{method φreason} ('add' @{syntax thms})? ('del' @{syntax thms})?
›
➧ @{method φreason}~▩‹add›~‹a›~▩‹del›~‹b›
applies φ-LPR to the proof state (which is a HHF sequent~\cite{isar-ref}).
This means the subgoals of the proof are regarded as antecedents and φ-LPR reasons about them
one by one in order.
Optional modifier ▩‹add›~‹a› adds introduction rules ‹a› temporarily with default patterns
(the conclusion of the rule) and default priority (100).
Modifier ▩‹del›~‹b› removes introduction rules ‹b› temporarily.
We do not yet provide modifiers to alter elimination rules.
›
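text ‹For instance (a hypothetical sketch of the syntax only), a proof may invoke the engine on the
current subgoals by ▩‹apply φreason›, or with temporarily added rules by ▩‹apply (φreason add conjI)›.›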
section ‹Predefined Antecedents, Reasoners, and Rules›
subsection ‹Auxiliary Structures›
subsubsection ‹Isomorphic Atomize›
text ‹The standard ‹Object_Logic.atomize› and ‹Object_Logic.rulify› are not isomorphic in the sense
that for a given rule ‹R›, ‹Object_Logic.rulify (Object_Logic.atomize R)› does not exactly
equal ‹R›. This section gives a way to address this issue.›
ML_file ‹library/iso_atomize.ML›
definition ‹pure_imp_embed ≡ (⟶)›
definition pure_all_embed :: ‹('a ⇒ bool) ⇒ bool› (binder ‹∀⇩e⇩m⇩b⇩e⇩d › 10)
where ‹pure_all_embed ≡ (All)›
definition ‹pure_conj_embed ≡ (∧)›
definition ‹pure_prop_embed x ≡ x›
lemma [iso_atomize_rules, symmetric, iso_rulify_rules]:
‹(P ⟹ Q) ≡ Trueprop (pure_imp_embed P Q)›
unfolding atomize_imp pure_imp_embed_def .
lemma [iso_atomize_rules, symmetric, iso_rulify_rules]:
‹(P &&& Q) ≡ Trueprop (pure_conj_embed P Q)›
unfolding atomize_conj pure_conj_embed_def .
lemma [iso_atomize_rules, symmetric, iso_rulify_rules]:
‹(⋀x. P x) ≡ Trueprop (pure_all_embed (λx. P x))›
unfolding atomize_all pure_all_embed_def .
lemma [iso_atomize_rules, symmetric, iso_rulify_rules]:
‹PROP Pure.prop (Trueprop P) ≡ Trueprop (pure_prop_embed P)›
unfolding Pure.prop_def pure_prop_embed_def .
subsubsection ‹Action›
text ‹In the reasoning, antecedents of the same form may have different purposes, e.g.,
antecedent ‹P = ?Q› may expect a complete simplification, a numeric calculation only, or any other
specific conversion. Depending on the purpose, antecedents are expected to be processed by
different reasoners. To achieve this, because the engine selects reasoners by syntactic pattern,
this section proposes a general structure tagging the purpose of antecedents.
The purpose is denoted by the ‹action› type, which is left unspecified because it serves only a
syntactic purpose.›
text ‹
‹‹P @action A›› tags antecedent \<^prop>‹P› with the specific purpose denoted by \<^term>‹A›.
The type variable \<^typ>‹'category› enables actions to be classified by types and type classes.
For example, some operation may be designed for any generic action ‹?act :: (?'ty::cls) action›
that falls into class ‹cls›.
∗‹Comment: I am thinking this category type variable is a bad design because the indexing
data structure (Net) we are using doesn't support type sorts, so this feature is actually
not indexed at all, and the reasoning here degrades to a linear one-by-one search!
Maybe classification should be done by some term-level structure. Let's think about it when we have time!››
definition Action_Tag_embed :: ‹bool ⇒ action ⇒ bool›
where ‹Action_Tag_embed P A ≡ P›
lemma [iso_atomize_rules, symmetric, iso_rulify_rules]:
‹PROP Action_Tag (Trueprop P) A ≡ Trueprop (Action_Tag_embed P A)›
unfolding Action_Tag_def Action_Tag_embed_def .
lemma Action_Tag_D:
‹P @action A ⟹ P›
unfolding Action_Tag_def .
lemma Conv_Action_Tag_I:
‹X = X @action A›
unfolding Action_Tag_def ..
ML_file ‹library/action_tag.ML›
subsubsection ‹Mode›
text ‹Modes are general annotations used in various antecedents, which may select a specific
reasoning behavior among slightly different options.
Their exact meaning depends on the specific antecedent using them.
An example can be found in \cref{sec:proof-obligation}.›
type_synonym mode = action
text ‹We provide a series of predefined modes, which may be commonly useful.›
consts default :: mode
consts MODE_SIMP :: mode
consts MODE_COLLECT :: mode
consts MODE_AUTO :: mode
subsection ‹General Rules›
text ‹❙‹Schematic variables› can be instantiated (assigned) by reasoners.
The instantiation of a schematic variable ‹?v› updates all the occurrences of ‹?v› in the
remaining sequent, and this instantiation can be seen as assigning the results of executing the
antecedent.
For example,
\[ ‹1 + 2 = ?result ⟹ Print ?result ⟹ Done› \]
the reasoning of antecedent ‹1 + 2 = ?result› instantiates ‹?result› to ‹3›, resulting in
\[ ‹Print 3 ⟹ Done› \]
If we view the antecedent as a program (sub-routine),
the schematic variables of the antecedent take on the meaning of ∗‹outputs›,
and we name them ∗‹output variables›.
The following ‹Try› antecedent is such an example.›
subsubsection ‹Try›
definition Try :: ‹bool ⇒ bool ⇒ bool› where ‹Try success_or_fail P = P›
text ‹
The typical usage is ‹‹Try ?success_or_fail P››, where
‹P› should be an antecedent having some fallback reasoner (not given here),
and ‹?success_or_fail› is an output variable indicating whether ‹P› was successfully
deduced ∗‹without› using the fallback.
A high-priority (800) rule reasons about ‹‹Try True P›› normally and sets the output variable
‹success_or_fail› to true.›
lemma [φreason 800 for ‹Try ?S ?P›]:
‹ P
⟹ Try True P›
unfolding Try_def .
text ‹
Users of ‹‹Try True P›› should provide the fallback rule for their own ‹P›.
It depends on the application scenario, and of course there is no general fallback rule.
The fallback rule may have the following form,
\[ ‹ Fallback_of_P ⟹ Try False P › \]
›
subsubsection ‹Compact Representation of Antecedents›
text ‹Meta-programming is feasible in φ-LPR.
The reasoning of an antecedent may dynamically generate another antecedent and assign it to
an output variable of type \<^typ>‹bool›.
When multiple antecedents are to be generated, it is
more efficient to contract them into one antecedent using conjunctions (e.g. ‹A1 ∧ A2 ∧ A3 ∧ ⋯›),
so that they can be represented by one output variable of type \<^typ>‹bool›.
‹(∧⇩r)› and ‹(∀⇩r)› are used to contract antecedents and to embed universally quantified variables
respectively.
›
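text ‹For example, a reasoner may assign the contracted antecedent ‹A1 ∧⇩𝗋 (∀⇩𝗋x. A2 x)›
to such an output variable; the rules below then unfold the contraction and reason about ‹A1› and
every instance of ‹A2 x› in order.›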
definition Compact_Antecedent :: ‹bool ⇒ bool ⇒ bool› (infixr "∧⇩𝗋" 35)
where [iff]: ‹Compact_Antecedent = (∧)›
definition Compact_Forall :: ‹('a ⇒ bool) ⇒ bool› (binder "∀⇩𝗋" 10)
where [iff]: ‹Compact_Forall = All›
text ‹Assertive rules are given to unfold the contraction and reason about the antecedents in order.›
lemma [φreason 1000]:
‹P ⟹ Q ⟹ P ∧⇩𝗋 Q›
unfolding Compact_Antecedent_def ..
lemma [φreason 1000]:
‹(⋀x. P x) ⟹ ∀⇩𝗋x. P x›
unfolding Compact_Forall_def ..
declare conjunctionI[φreason 1000]
subsubsection ‹Matches›
text ‹Antecedent \<^prop>‹Matches pattern term› asserts \<^term>‹pattern› matches \<^term>‹term›;
\<^prop>‹NO_MATCH pattern term› asserts \<^term>‹pattern› does not match \<^term>‹term›.›
definition Matches :: ‹'a ⇒ 'a ⇒ bool› where ‹Matches _ _ = True›
lemma Matches_I: ‹Matches pattern term› unfolding Matches_def ..
φreasoner_ML Matches 2000 (‹Matches ?pattern ?term›) =
‹fn (ctxt, sequent) =>
let
val (\<^const>‹Trueprop› $ (Const (\<^const_name>‹Matches›,_) $ pattern $ term))
= Thm.major_prem_of sequent
in
if Pattern.matches (Proof_Context.theory_of ctxt) (pattern, term)
then Seq.single (ctxt, @{thm Matches_I} RS sequent)
else Seq.empty
end›
lemma NO_MATCH_I: "NO_MATCH A B" unfolding NO_MATCH_def ..
φreasoner_ML NO_MATCH 0 ("NO_MATCH ?A ?B") = ‹
fn (ctxt,th) =>
let
val (\<^const>‹Trueprop› $ (Const (\<^const_name>‹NO_MATCH›, _) $ a $ b)) = Thm.major_prem_of th
in
if Pattern.matches (Proof_Context.theory_of ctxt) (a,b)
then Seq.empty
else Seq.single (ctxt, @{thm NO_MATCH_I} RS th)
end
›
subsubsection ‹Proof By Assumption›
definition By_Assumption :: ‹prop ⇒ prop› where ‹By_Assumption P ≡ P›
definition May_By_Assumption :: ‹prop ⇒ prop› where ‹May_By_Assumption P ≡ P›
lemma By_Assumption_I: ‹PROP P ⟹ PROP By_Assumption P› unfolding By_Assumption_def .
lemma May_By_Assumption_I: ‹PROP P ⟹ PROP May_By_Assumption P› unfolding May_By_Assumption_def .
φreasoner_ML By_Assumption 1000 (‹PROP By_Assumption _›) = ‹fn (ctxt,sequent) =>
HEADGOAL (Tactic.assume_tac ctxt) (@{thm By_Assumption_I} RS sequent)
|> Seq.map (pair ctxt)
›
φreasoner_ML May_By_Assumption 1000 (‹PROP May_By_Assumption _›) = ‹fn (ctxt,sequent) =>
let val sequent' = @{thm May_By_Assumption_I} RS sequent
in (HEADGOAL (Tactic.assume_tac ctxt) ORELSE Seq.single) sequent'
|> Seq.map (pair ctxt)
end
›
subsection ‹Cut›
text ‹The cuts have been introduced in \cref{sec:cut}.
Antecedent ‹𝗋Cut› triggers a global cut.
›
definition 𝗋Cut :: bool where ‹𝗋Cut = True›
lemma [iff, φreason 1000000]: ‹𝗋Cut› unfolding 𝗋Cut_def ..
text ‹Antecedent ‹𝗋Success› terminates the reasoning successfully with the reasoning state as
the result.›
definition 𝗋Success :: bool where ‹𝗋Success = True›
lemma 𝗋Success_I[iff]: ‹𝗋Success› unfolding 𝗋Success_def ..
φreasoner_ML 𝗋Success 10000 (‹𝗋Success›) = ‹fn (ctxt,sequent) =>
raise Phi_Reasoner.Success (ctxt, @{thm 𝗋Success_I} RS sequent)›
subsection ‹Proof Obligation \& Guard of Rule \label{sec:proof-obligation}›
definition Premise :: "mode ⇒ bool ⇒ bool" where "Premise _ x = x"
abbreviation Normal_Premise ("𝗉𝗋𝖾𝗆𝗂𝗌𝖾 _" [27] 26)
where "Normal_Premise ≡ Premise default"
abbreviation Simp_Premise ("𝗌𝗂𝗆𝗉𝗋𝖾𝗆 _" [27] 26)
where "Simp_Premise ≡ Premise MODE_SIMP"
abbreviation Proof_Obligation ("𝗈𝖻𝗅𝗂𝗀𝖺𝗍𝗂𝗈𝗇 _" [27] 26)
where "Proof_Obligation ≡ Premise MODE_COLLECT"
text ‹
\<^prop>‹Premise mode P› represents an ordinary proposition has to be proved during the reasoning.
There are different modes expressing different roles in the reasoning.
➧ \<^prop>‹𝗌𝗂𝗆𝗉𝗋𝖾𝗆 P› is a ∗‹guard› of a rule, which constrains that the rule is appliable only
when \<^prop>‹P› can be solved ∗‹automatically› during the reasoning.
If \<^prop>‹P› fails to be solved, even if it is actually valid, the rule will not be applied.
Therefore, \<^prop>‹P› has to be as simple as possible. The tactic used to solve \<^prop>‹P› is
@{method clarsimp}.
A more powerful tactic like @{method auto} is not adoptable because the tactic must be safe and
non-blocking commonly.
A blocking search branch blocks the whole reasoning, which is not acceptable.
\<^prop>‹𝗌𝗂𝗆𝗉𝗋𝖾𝗆 P› is not for proof obligations that are intended to be solved by users.
It is more like 'controller or switch' of the rules, i.e. ∗‹guard›.
➧ \<^prop>‹𝗉𝗋𝖾𝗆𝗂𝗌𝖾 P› represents a proof obligation.
Proof obligations in reasoning rules should be represented by it.
➧ \<^prop>‹𝗈𝖻𝗅𝗂𝗀𝖺𝗍𝗂𝗈𝗇 Q› by contrast
represents proof obligations ‹Q› that are ready to be solved by user (or by automatic tools).
›
text ‹
The difference between \<^prop>‹𝗈𝖻𝗅𝗂𝗀𝖺𝗍𝗂𝗈𝗇 Q› and \<^prop>‹𝗉𝗋𝖾𝗆𝗂𝗌𝖾 P› is subtle:
In a reasoning process, many reasoning rules may be applied, which may generate many
\<^prop>‹𝗉𝗋𝖾𝗆𝗂𝗌𝖾 P›.
The engine tries to solve \<^prop>‹𝗉𝗋𝖾𝗆𝗂𝗌𝖾 P› automatically but if it fails the search branch
would be stuck. Because the search has not been finished, it is bad to ask users' intervention
to solve the goal because the search branch may high-likely fail later.
It is ∗‹not ready› for user to solve ‹P› here, and suggestively ‹P› should be deferred to
an ideal moment for user solving obligations.
This is `ideal moment' is \<^prop>‹𝗈𝖻𝗅𝗂𝗀𝖺𝗍𝗂𝗈𝗇 Q›. If any \<^prop>‹𝗈𝖻𝗅𝗂𝗀𝖺𝗍𝗂𝗈𝗇 Q› exists in the antecedents
of the sequent, the engine contracts ‹P› into the latest \<^prop>‹𝗈𝖻𝗅𝗂𝗀𝖺𝗍𝗂𝗈𝗇 Q›, e.g., from
\[ ‹𝗉𝗋𝖾𝗆𝗂𝗌𝖾 P ⟹ A1 ⟹ 𝗈𝖻𝗅𝗂𝗀𝖺𝗍𝗂𝗈𝗇 Q ⟹ 𝗈𝖻𝗅𝗂𝗀𝖺𝗍𝗂𝗈𝗇 Q' ⟹ ⋯ › \]
it deduces
\[ ‹A1 ⟹ 𝗈𝖻𝗅𝗂𝗀𝖺𝗍𝗂𝗈𝗇 Q ∧ P ⟹ 𝗈𝖻𝗅𝗂𝗀𝖺𝗍𝗂𝗈𝗇 Q' ⟹ ⋯ › \]
In short, \<^prop>‹𝗈𝖻𝗅𝗂𝗀𝖺𝗍𝗂𝗈𝗇 Q› collects obligations generated during a reasoning process,
and enables user to solve them at an idea moment.
A typical reasoning request (the initial reasoning state namely the argument of the reasoning
process) is of the following form,
\[ ‹Problem ⟹ 𝗋Success ⟹ 𝗈𝖻𝗅𝗂𝗀𝖺𝗍𝗂𝗈𝗇 True ⟹ Conclusion› \]
The ‹True› represents empty collection or none obligation.
If the reasoning succeeds, it returns sequent in form
\[ ‹𝗈𝖻𝗅𝗂𝗀𝖺𝗍𝗂𝗈𝗇 True ∧ P1 ∧ P2 ∧ ⋯ ⟹ Conclusion› \]
where ‹P1, P2, ⋯› are obligations generated by reasoning ‹Problem›.
And then, user may solve the obligations manually or by automatic tools.
For antecedent \<^prop>‹𝗈𝖻𝗅𝗂𝗀𝖺𝗍𝗂𝗈𝗇 Q›,
if there is another \<^prop>‹𝗈𝖻𝗅𝗂𝗀𝖺𝗍𝗂𝗈𝗇 Q'› in the remaining antecedents,
the reasoner also defer ‹Q› to ‹Q'›, just like \<^prop>‹𝗈𝖻𝗅𝗂𝗀𝖺𝗍𝗂𝗈𝗇 Q› is a \<^prop>‹𝗉𝗋𝖾𝗆𝗂𝗌𝖾 Q›.
If no \<^prop>‹𝗈𝖻𝗅𝗂𝗀𝖺𝗍𝗂𝗈𝗇 Q'› exists in the remaining antecedents,
the reasoner of \<^prop>‹𝗉𝗋𝖾𝗆𝗂𝗌𝖾 P› and \<^prop>‹𝗈𝖻𝗅𝗂𝗀𝖺𝗍𝗂𝗈𝗇 Q› raises
an error aborting the whole reasoning, because the reasoning request is not configured correctly.
Semantically, \<^prop>‹𝗈𝖻𝗅𝗂𝗀𝖺𝗍𝗂𝗈𝗇 Q› represents a proof obligation ‹Q› intended to be addressed by
user. It can be deferred but the reasoner never attempts to solve \<^prop>‹𝗈𝖻𝗅𝗂𝗀𝖺𝗍𝗂𝗈𝗇 Q› practically.
Nonetheless, we still provide tool for reasoning obligations automatically, albeit they have
to be called separately with the reasoning engine. See ▩‹auto_obligation_solver› and
▩‹safer_obligation_solver› in 🗏‹library/reasoners.ML›.
›
lemma Premise_I[intro!]: "P ⟹ Premise mode P" unfolding Premise_def by simp
lemma Premise_D: "Premise mode P ⟹ P" unfolding Premise_def by simp
lemma Premise_E[elim!]: "Premise mode P ⟹ (P ⟹ C) ⟹ C" unfolding Premise_def by simp
subsubsection ‹Implementation of the reasoners›
lemma Premise_True[φreason 5000]: "Premise mode True" unfolding Premise_def ..
lemma [φreason 5000]:
" Premise mode P
⟹ Premise mode (Premise any_mode P)"
unfolding Premise_def .
lemma Premise_refl[φreason 2000 for ‹Premise ?mode (?x = ?x)›
‹Premise ?mode (?x = ?var_x)›
‹Premise ?mode (?var_x = ?x)›]:
"Premise mode (x = x)"
unfolding Premise_def ..
lemma contract_obligations:
"(Premise mode P ⟹ 𝗈𝖻𝗅𝗂𝗀𝖺𝗍𝗂𝗈𝗇 Q ⟹ PROP C) ≡ (𝗈𝖻𝗅𝗂𝗀𝖺𝗍𝗂𝗈𝗇 P ∧ Q ⟹ PROP C)"
unfolding Premise_def by rule simp+
lemma contract_premise_true:
"(True ⟹ Premise mode B) ≡ Trueprop (Premise mode B) "
by simp
lemma contract_premise_imp:
"(A ⟹ Premise mode B) ≡ Trueprop (Premise mode (A ⟶ B)) "
unfolding Premise_def atomize_imp .
lemma contract_premise_all:
"(⋀x. Premise mode (P x)) ≡ Trueprop ( Premise mode (∀x. P x)) "
unfolding Premise_def atomize_all .
declare [[ML_debugger = true]]
ML ‹
structure Useful_Thms = Named_Thms (
val name = \<^binding>‹useful›
val description = "theorems to be inserted in the automatic proving, \
\having the same effect as the @{command using} command."
)
›
setup ‹Useful_Thms.setup›
ML_file ‹library/PLPR_Syntax.ML›
ML_file "library/reasoners.ML"
φreasoner_ML Normal_Premise 10 (‹𝗉𝗋𝖾𝗆𝗂𝗌𝖾 ?P› | ‹𝗈𝖻𝗅𝗂𝗀𝖺𝗍𝗂𝗈𝗇 ?P›)
= ‹Phi_Reasoners.wrap Phi_Reasoners.defer_obligation_tac›
subsection ‹Reasoning Frame›
definition ‹𝗋BEGIN ⟷ True›
definition ‹𝗋END ⟷ True›
text ‹Antecedents \<^prop>‹𝗋BEGIN› and \<^prop>‹𝗋END› form a nested reasoning scope
resembling a subroutine for specific reasoning tasks or problems.
\[ ‹… ⟹ 𝗋BEGIN ⟹ Nested ⟹ Reasoning ⟹ 𝗋END ⟹ …› \]
The scoped antecedents should be regarded as a single ∗‹unit antecedent›
invoking a nested φ-LPR reasoning process and returning ∗‹only› the first reached solution
(just like the behaviour of the φ-LPR engine itself).
During backtracking, search branches before the unit will be backtracked, but sub-optimal solutions
of the unit are not backtracked.
In addition, cuts are confined to the search paths within the scope, as a unit.
Because of the confined cuts and the reduced backtracking, performance is improved.
Sometimes a cut is admissible (green) as an expected behavior among several rules and reasoners
which constitute a loosely-gathered module for a specific problem.
However, the cut is still not safe to use because an external rule using the reasoning module
may rely on backtracking, while the cut inside the module prevents
backtracking in the external rule.
In this case, the reasoning scope is helpful: it wraps the loosely-gathered module,
confining side effects such as cuts.
Specifically, any search path that reaches \<^prop>‹𝗋BEGIN› opens a new ∗‹frame›, namely a space
of search paths.
The sub-searches continuing the path before reaching
the paired \<^prop>‹𝗋END› are in this frame.
As φ-LPR works in DFS, a frame can contain another frame precisely when the search in the frame
encounters another \<^prop>‹𝗋BEGIN›.
\[ ‹… ⟹ 𝗋BEGIN ⟹ A⇩1 ⟹ 𝗋BEGIN ⟹ A⇩2 ⟹ 𝗋END ⟹ A⇩3 ⟹ 𝗋END ⟹ …› \]
Once any search path encounters an \<^prop>‹𝗋END›, the innermost frame is closed and the sequent of the
search path is returned, dropping all other branches in the frame.
The mechanism checks whether all \<^prop>‹𝗋BEGIN› and \<^prop>‹𝗋END› are paired.
Any global cut prunes all and only the search branches in the innermost frame to which the cut
belongs. \<^prop>‹𝗋Success› is prohibited in a nested scope because we do not know how to process
the remaining antecedents after the \<^prop>‹𝗋Success› nor how to return them into the outer scope.
›
definition 𝗋Call :: ‹prop ⇒ prop› ("𝗋CALL _" [3] 2)
where ‹𝗋Call P ≡ PROP P›
lemma 𝗋BEGIN_I: ‹𝗋BEGIN› unfolding 𝗋BEGIN_def ..
lemma 𝗋END_I: ‹𝗋END› unfolding 𝗋END_def ..
lemma 𝗋Call_I: ‹PROP P ⟹ 𝗋CALL PROP P› unfolding 𝗋Call_def .
ML_file ‹library/nested.ML›
φreasoner_ML 𝗋BEGIN 1000 (‹𝗋BEGIN›) = ‹PLPR_Nested_Reasoning.enter_scope›
φreasoner_ML 𝗋END 1000 (‹𝗋END›) = ‹PLPR_Nested_Reasoning.exit_scope›
φreasoner_ML 𝗋Call 1000 (‹PROP 𝗋Call _›) = ‹PLPR_Nested_Reasoning.call›
definition 𝗋Call_embed :: ‹bool ⇒ bool› where ‹𝗋Call_embed P ≡ P›
lemma [iso_atomize_rules, symmetric, iso_rulify_rules]:
‹𝗋Call (Trueprop P) ≡ Trueprop (𝗋Call_embed P)›
unfolding 𝗋Call_def 𝗋Call_embed_def .
subsection ‹Pruning›
text ‹At a reasoning state ‹A›, multiple search branches may be emitted in parallel to
find a solution to the antecedent.
A branch may find a solution while other branches from ‹A› still remain in the search history.
The reasoning, in DFS manner, then continues to solve the next antecedent ‹B›; assume ‹B› fails.
The reasoning then backtracks and redoes the search of ‹A› on the remaining branches of ‹A›.
This is unreasonable because the reasoning is redoing an already solved problem on ‹A›.
To address this, a solution is to prune the branches of ‹A› after ‹A› succeeds.
In this section we introduce the ‹subgoal› mechanism achieving this pruning.
Each antecedent ‹A› is tagged with a goal context ‹G›, as ‹‹A ❙@❙G❙O❙A❙L G››.
A reasoning rule may check that the goal ‹G› has not been solved before doing any substantial
computation, e.g.,
\[ ‹CHK_SUBGOAL G ⟹ Computation ⟹ (Ant ❙@❙G❙O❙A❙L G)› \]
Antecedent ‹CHK_SUBGOAL G› succeeds only when the goal ‹G› is not marked solved, ∗‹or›, the current
search branch is the thread that marked ‹G› solved previously.
When a rule succeeds, the rule may mark the goal ‹G› solved to prune other branches that check ‹G›.
\[ ‹Computation ⟹ SOLVE_SUBGOAL G ⟹ (Ant ❙@❙G❙O❙A❙L G)› \]
If a goal ‹G› has been marked solved, any other antecedent ‹SOLVE_SUBGOAL G› marking ‹G› again will
fail, unless the current search branch is the thread that marked ‹G› solved previously.
A subgoal is represented by an unspecified type which only has a syntactic effect in the reasoning.›
typedecl "subgoal"
consts subgoal_context :: ‹subgoal ⇒ action›
abbreviation GOAL_CTXT :: "prop ⇒ subgoal ⇒ prop" ("_ ❙@❙G❙O❙A❙L _" [2,1000] 2)
where "(PROP P ❙@❙G❙O❙A❙L G) ≡ (PROP P @action subgoal_context G)"
definition CHK_SUBGOAL :: "subgoal ⇒ bool"
where "CHK_SUBGOAL X ⟷ True"
definition SOLVE_SUBGOAL :: "subgoal ⇒ bool"
where "SOLVE_SUBGOAL X ⟷ True"
text ‹Subgoals are hierarchical, with the unique top-most goal named ‹‹TOP_GOAL››.
New goal contexts are obtained by antecedent ‹‹SUBGOAL G ?G'››, which assigns a new subgoal
under an unsolved ‹G› to the output variable ‹?G'›.
The reasoning raises an error if ‹?G'› is not a schematic variable.
‹‹SOLVE_SUBGOAL G›› marks the goal ‹G› and all its subgoals solved.
The ‹TOP_GOAL› can never be solved.›
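text ‹A rule using the mechanism may therefore take the following shape (a sketch assembling the
antecedents above; ‹Computation› and ‹Ant› are placeholders),
\[ ‹SUBGOAL G ?G' ⟹ Computation ❙@❙G❙O❙A❙L ?G' ⟹ SOLVE_SUBGOAL G ⟹ (Ant ❙@❙G❙O❙A❙L G)› \]
which allocates a fresh subgoal for its own computation and marks ‹G› solved on success.›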
consts TOP_GOAL :: "subgoal"
definition SUBGOAL :: "subgoal ⇒ subgoal ⇒ bool" where "SUBGOAL ROOT NEW_GOAL = True"
subsubsection ‹Implementation of the Subgoal Reasoners›
lemma SUBGOAL_I[iff]: "SUBGOAL ROOT NEWGOAL" unfolding SUBGOAL_def ..
lemma CHK_SUBGOAL_I[iff]: "CHK_SUBGOAL X" unfolding CHK_SUBGOAL_def ..
lemma SOLVE_SUBGOAL_I[iff]: "SOLVE_SUBGOAL X" unfolding SOLVE_SUBGOAL_def ..
ML_file ‹library/Subgoal_Env.ML›
φreasoner_ML SUBGOAL 2000 (‹SUBGOAL ?ROOT ?NEWGOAL›) = ‹Subgoal_Env.subgoal›
φreasoner_ML CHK_SUBGOAL 2000 (‹CHK_SUBGOAL ?GOAL›) = ‹Subgoal_Env.chk_subgoal›
φreasoner_ML SOLVE_SUBGOAL 9900 (‹SOLVE_SUBGOAL ?GOAL›) = ‹Subgoal_Env.solve_subgoal›
lemma [φreason 800 for ‹Try ?S ?P ❙@❙G❙O❙A❙L ?G›]:
‹ P ❙@❙G❙O❙A❙L G
⟹ Try True P ❙@❙G❙O❙A❙L G›
unfolding Try_def .
subsection ‹Branch›
text ‹‹A ||| B› is an antecedent encoding a search branch.
Compared with the ordinary approach of using multiple submissive rules,
it features a short-cut realized by the subgoal mechanism: it tries each antecedent from left to
right until the first success of solving one antecedent, and none of the remaining ones are attempted.›
definition Branch :: ‹prop ⇒ prop ⇒ prop› (infixr "|||" 3)
where ‹Branch A B ≡ (⋀C. (PROP A ⟹ C) ⟹ (PROP B ⟹ C) ⟹ C)›
definition Branch_embed :: ‹bool ⇒ bool ⇒ bool›
where ‹Branch_embed A B ≡ A ∨ B›
lemma atomize_Branch:
‹Branch (Trueprop A) (Trueprop B) ≡ Trueprop (A ∨ B)›
unfolding Branch_def or_def atomize_eq atomize_imp atomize_all .
lemma [iso_atomize_rules, symmetric, iso_rulify_rules]:
‹Branch (Trueprop A) (Trueprop B) ≡ Trueprop (Branch_embed A B)›
unfolding Branch_embed_def atomize_Branch .
subsubsection ‹Implementation›
lemma Branch_L:
‹ PROP A
⟹ PROP A ||| PROP B›
unfolding Action_Tag_def Branch_def
proof -
assume A: ‹PROP A›
show ‹(⋀C. (PROP A ⟹ C) ⟹ (PROP B ⟹ C) ⟹ C)› proof -
fix C :: "bool"
assume A': ‹PROP A ⟹ C›
show ‹C› using A'[OF A] .
qed
qed
lemma Branch_R:
‹ PROP B
⟹ PROP A ||| PROP B›
unfolding Action_Tag_def Branch_def
proof -
assume B: ‹PROP B›
show ‹(⋀C. (PROP A ⟹ C) ⟹ (PROP B ⟹ C) ⟹ C)› proof -
fix C :: "bool"
assume B': ‹PROP B ⟹ C›
show ‹C› using B'[OF B] .
qed
qed
declare [[φreason 1000 Branch_L Branch_R for ‹PROP ?A ||| PROP ?B›]]
subsection ‹Simplification \& Rewrite›
text ‹‹‹𝗌𝗂𝗆𝗉𝗅𝗂𝖿𝗒[mode] ?result : term›› is a generic antecedent for simplifying ‹term› under the
given ‹mode›. The ‹?result› should be an output variable receiving the result of the simplification.
We implement a ‹default› mode where the system simpset is used to simplify
‹term›. Users may configure their own mode and their own reasoner using a different simpset.›
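text ‹For instance (reusing the arithmetic sketch from the section on output variables),
\[ ‹𝗌𝗂𝗆𝗉𝗅𝗂𝖿𝗒 ?result : 1 + 2 ⟹ Print ?result ⟹ Done› \]
instantiates ‹?result› to the simplified term, here ‹3›, before the remaining antecedents are
reasoned.›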
definition Simplify :: " mode ⇒ 'a ⇒ 'a ⇒ bool " ("𝗌𝗂𝗆𝗉𝗅𝗂𝖿𝗒[_] _ :/ _" [10,1000,10] 9)
where "Simplify setting result origin ⟷ result = origin"
definition Do_Simplificatin :: ‹'a ⇒ 'a ⇒ prop›
where ‹Do_Simplificatin result origin ≡ (result ≡ origin)›
lemma [cong]: "A ≡ A' ⟹ Simplify s x A ≡ Simplify s x A' " by simp
lemma Simplify_D: ‹Simplify m A B ⟹ A = B› unfolding Simplify_def .
lemma Simplify_I: ‹A = B ⟹ Simplify m A B› unfolding Simplify_def .
lemma Do_Simplification:
‹PROP Do_Simplificatin A B ⟹ Simplify s A B›
unfolding Do_Simplificatin_def Simplify_def atomize_eq .
lemma End_Simplification : ‹PROP Do_Simplificatin A A› unfolding Do_Simplificatin_def .
lemma End_Simplification': ‹𝗉𝗋𝖾𝗆𝗂𝗌𝖾 A = B ⟹ PROP Do_Simplificatin A B›
unfolding Do_Simplificatin_def Premise_def atomize_eq .
ML_file ‹library/simplifier.ML›
hide_fact End_Simplification' End_Simplification Do_Simplification
subsubsection ‹Default Simplifier›
abbreviation Default_Simplify :: " 'a ⇒ 'a ⇒ bool " ("𝗌𝗂𝗆𝗉𝗅𝗂𝖿𝗒 _ : _" [1000,10] 9)
where "Default_Simplify ≡ Simplify default"
φreasoner_ML Default_Simplify 1000 (‹Default_Simplify ?X' ?X›)
= ‹PLPR_Simplifier.simplifier NONE I›
φreasoner_ML Simp_Premise 10 (‹𝗌𝗂𝗆𝗉𝗋𝖾𝗆 ?P›)
= ‹PLPR_Simplifier.simplifier NONE I›
subsection ‹Optimal Solution›
text ‹φ-LPR is a priority-driven DFS that returns the first reached solution, which may not be the
optimal one for a certain measure. This section gives a way to find the solution of minimum cost
among a given set of candidates.
›
definition Optimum_Solution :: ‹prop ⇒ prop› where [iff]: ‹Optimum_Solution P ≡ P›
definition [iff]: ‹Begin_Optimum_Solution ⟷ True›
definition [iff]: ‹End_Optimum_Solution ⟷ True›
text ‹Each individual invocation of ‹Optimum_Solution P›
invokes an individual instance of the optimal-solution reasoning.
The reasoning of ‹P› proceeds exhaustively, i.e., it explores all backtracking points except local cuts.
›
paragraph ‹Candidates›
text ‹The candidates are all search branches diverging from the antecedents marked by ‹𝗋Choice›.›
text ‹For the antecedents marked by ‹𝗋Choice›, the mechanism traverses exhaustively all
combinations of their (direct) solvers, but for the other, unmarked antecedents, the strategy is
unchanged and remains as greedy as the usual behavior --- returning the first-reached solution
and discarding the others.
As an example, in
‹Begin_Optimum_Solution ⟹ 𝗋Choice A ⟹ B ⟹ 𝗋Choice C ⟹ End_Optimum_Solution ⟹ …›,
assuming each of ‹A,B,C› has 2 solvers ‹A⇩1,A⇩2,B⇩1,B⇩2,C⇩1,C⇩2› and assuming ‹B⇩1› has higher priority
than ‹B⇩2› and succeeds, the mechanism traverses the 4 combinations of the solvers ‹A⇩1,C⇩1›,
‹A⇩1,C⇩2›, ‹A⇩2,C⇩1›, ‹A⇩2,C⇩2›, i.e., it is exhaustive only on ‹𝗋Choice›-marked antecedents but still
greedy on the others.
Note that, even when marked by ‹𝗋Choice›, local cuts are still valid and cut search branches.
Global cuts are disabled during the whole reasoning because they kill other search branches.
‹𝗋Success› is available, and the mechanism ensures that it is always the optimal branch that invokes the ‹𝗋Success›.
›
paragraph ‹Cost›
text ‹The cost is measured by reports from the following antecedents inserted in the user rules.›
definition Incremental_Cost :: ‹int ⇒ bool› where [iff]: ‹Incremental_Cost _ = True›
definition Threshold_Cost :: ‹int ⇒ bool› ("𝗍𝗁𝗋𝖾𝗌𝗁𝗈𝗅𝖽") where [iff]: ‹Threshold_Cost _ = True›
text ‹The final cost of a reasoning process is the sum of all the reported ‹Incremental_Cost› or
the maximum ‹Threshold_Cost›, whichever is larger.
If the costs of two branches are the same, the first one reached is considered better.
›
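text ‹For example (a sketch only; ‹Expensive_Step› and ‹Conclusion› are placeholders), a user rule
may report that applying it costs 2 units by including the corresponding antecedent,
\[ ‹Incremental_Cost 2 ⟹ Expensive_Step ⟹ Conclusion›. \]›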
subsubsection ‹Implementation›
definition Optimum_Solution_embed :: ‹bool ⇒ bool› where ‹Optimum_Solution_embed P ≡ P›
lemma [iso_atomize_rules, symmetric, iso_rulify_rules]:
‹Optimum_Solution (Trueprop P) ≡ Trueprop (Optimum_Solution_embed P)›
unfolding Optimum_Solution_embed_def Optimum_Solution_def .
lemma Incremental_Cost_I: ‹Incremental_Cost X› unfolding Incremental_Cost_def ..
lemma Threshold_Cost_I: ‹Threshold_Cost X› unfolding Threshold_Cost_def ..
lemma Begin_Optimum_Solution_I: ‹Begin_Optimum_Solution› unfolding Begin_Optimum_Solution_def ..
lemma End_Optimum_Solution_I: ‹End_Optimum_Solution› unfolding End_Optimum_Solution_def ..
lemma Do_Optimum_Solution:
‹ PROP X
⟹ End_Optimum_Solution
⟹ PROP Optimum_Solution X›
unfolding Optimum_Solution_def .
ML_file_debug ‹library/optimum_solution.ML›
φreasoner_ML Incremental_Cost 1000 (‹Incremental_Cost _›) = ‹fn (ctxt,sequent) => Seq.make (fn () =>
let val _ $ (_ $ N) = Thm.major_prem_of sequent
val (_, n) = HOLogic.dest_number N
val sequent' = @{thm Incremental_Cost_I} RS sequent
in Seq.pull (PLPR_Optimum_Solution.report_cost (n,0) (ctxt,sequent'))
end
)›
φreasoner_ML Threshold_Cost 1000 (‹Threshold_Cost _›) = ‹fn (ctxt,sequent) => Seq.make (fn () =>
let val _ $ (_ $ N) = Thm.major_prem_of sequent
val (_, n) = HOLogic.dest_number N
val sequent' = @{thm Threshold_Cost_I} RS sequent
in Seq.pull (PLPR_Optimum_Solution.report_cost (0,n) (ctxt,sequent'))
end
)›
φreasoner_ML Optimum_Solution 1000 (‹PROP Optimum_Solution _›) = ‹
apsnd (fn th => @{thm Do_Optimum_Solution} RS th)
#> PLPR_Optimum_Solution.start
›
φreasoner_ML Begin_Optimum_Solution 1000 (‹Begin_Optimum_Solution›) = ‹
apsnd (fn th => @{thm Begin_Optimum_Solution_I} RS th)
#> PLPR_Optimum_Solution.start
›
φreasoner_ML End_Optimum_Solution 1000 (‹End_Optimum_Solution›) = ‹
apsnd (fn th => @{thm End_Optimum_Solution_I} RS th)
#> PLPR_Optimum_Solution.finish
›
subsubsection ‹Derivations›
definition Optimum_Among :: ‹prop ⇒ prop› where ‹Optimum_Among Candidates ≡ Candidates›
definition Optimum_Among_embed :: ‹bool ⇒ bool› where ‹Optimum_Among_embed X ≡ X›
lemma [iso_atomize_rules, symmetric, iso_rulify_rules]:
‹Optimum_Among (Trueprop P) ≡ Trueprop (Optimum_Among_embed P)›
unfolding Optimum_Among_embed_def Optimum_Among_def .
subsection ‹Environment Variables›
definition Push_Envir_Var :: ‹'name ⇒ 'a::{} ⇒ bool›
where ‹Push_Envir_Var Name Val ⟷ True›
definition Pop_Envir_Var :: ‹'name ⇒ bool› where ‹Pop_Envir_Var Name ⟷ True›
definition Get_Envir_Var :: ‹'name ⇒ 'a::{} ⇒ bool›
where ‹Get_Envir_Var Name Return ⟷ True›
definition Get_Envir_Var' :: ‹'name ⇒ 'a::{} ⇒ 'a ⇒ bool›
where ‹Get_Envir_Var' Name Default Return ⟷ True›
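text ‹These antecedents maintain a stack of named values in the reasoning context (see the
reasoners below): ‹Push_Envir_Var N V› binds ‹V› to the name ‹N›, ‹Pop_Envir_Var N› discards the
latest binding, and ‹Get_Envir_Var N ?V› retrieves the current value into the output variable ‹?V›
(‹Get_Envir_Var' N D ?V› additionally takes a default ‹D› used when ‹N› is unbound).
A sketched usage (the names are placeholders) is
\[ ‹Push_Envir_Var some_name (1::nat) ⟹ … ⟹ Get_Envir_Var some_name ?v ⟹ … ⟹ Pop_Envir_Var some_name ⟹ …› \]›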
subsubsection ‹Implementation›
ML_file ‹library/envir_var.ML›
lemma Push_Envir_Var_I: ‹Push_Envir_Var N V› unfolding Push_Envir_Var_def ..
lemma Pop_Envir_Var_I: ‹Pop_Envir_Var N› unfolding Pop_Envir_Var_def ..
lemma Get_Envir_Var_I : ‹Get_Envir_Var N V› for V :: ‹'v::{}› unfolding Get_Envir_Var_def ..
lemma Get_Envir_Var'_I: ‹Get_Envir_Var' N D V› for V :: ‹'v::{}› unfolding Get_Envir_Var'_def ..
φreasoner_ML Push_Envir_Var 1000 (‹Push_Envir_Var _ _›) = ‹fn (ctxt,sequent) => Seq.make (fn () =>
let val _ $ (_ $ N $ V) = Thm.major_prem_of sequent
val _ = if maxidx_of_term V <> ~1
then warning "PLPR Envir Var: The value to be assigned has schematic variables \
\which will not be retained!"
else ()
in SOME ((PLPR_Env.push (PLPR_Env.name_of N) V ctxt,
@{thm Push_Envir_Var_I} RS sequent),
Seq.empty) end
)›
φreasoner_ML Pop_Envir_Var 1000 (‹Pop_Envir_Var _›) = ‹fn (ctxt,sequent) => Seq.make (fn () =>
let val _ $ (_ $ N) = Thm.major_prem_of sequent
in SOME ((PLPR_Env.pop (PLPR_Env.name_of N) ctxt, @{thm Pop_Envir_Var_I} RS sequent),
Seq.empty) end
)›
φreasoner_ML Get_Envir_Var 1000 (‹Get_Envir_Var _ _›) = ‹fn (ctxt,sequent) => Seq.make (fn () =>
let val _ $ (_ $ N $ _) = Thm.major_prem_of sequent
val idx = Thm.maxidx_of sequent + 1
in case PLPR_Env.get (PLPR_Env.name_of N) ctxt
of NONE => Phi_Reasoner.error
("No enviromental variable " ^ PLPR_Env.name_of N ^ " is set")
| SOME V' =>
let val V = Thm.incr_indexes_cterm idx (Thm.cterm_of ctxt V')
in SOME ((ctxt, ( @{thm Get_Envir_Var_I}
|> Thm.incr_indexes idx
|> Thm.instantiate (TVars.make [((("'v",idx),[]), Thm.ctyp_of_cterm V)],
Vars.make [((("V", idx),Thm.typ_of_cterm V), V)])
) RS sequent),
Seq.empty)
end
end
)›
φreasoner_ML Get_Envir_Var' 1000 (‹Get_Envir_Var' _ _ _›) = ‹fn (ctxt,sequent) => Seq.make (fn () =>
let val _ $ (_ $ N $ D $ _) = Thm.major_prem_of sequent
val idx = Thm.maxidx_of sequent + 1
val V = Thm.cterm_of ctxt (case PLPR_Env.get (PLPR_Env.name_of N) ctxt
of SOME V => V | NONE => D)
|> Thm.incr_indexes_cterm idx
in SOME ((ctxt, ( @{thm Get_Envir_Var'_I}
|> Thm.incr_indexes idx
|> Thm.instantiate (TVars.make [((("'v",idx),[]), Thm.ctyp_of_cterm V)],
Vars.make [((("V", idx),Thm.typ_of_cterm V), V)])
) RS sequent),
Seq.empty)
end
)›
subsection ‹Recursion Guard›
definition 𝗋Recursion_Guard :: ‹'a::{} ⇒ prop ⇒ prop› ("𝗋RECURSION'_GUARD'(_')/ _" [2,2] 2)
where [iff]: ‹(𝗋RECURSION_GUARD(X) (PROP P)) ≡ PROP P›
text ‹\<^prop>‹𝗋RECURSION_GUARD(X) (PROP P)› annotates that the reasoning of \<^prop>‹P› is about the goal \<^term>‹X›.
It remembers \<^term>‹X›, and once the same goal \<^term>‹X› occurs again in the subsequent reasoning,
it aborts that search branch because an infinite recursion has happened.›
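text ‹For example (a sketch only; ‹Solve› and ‹Premises_of› are placeholders), a recursive rule may
protect itself against looping on the same goal by wrapping its body in the guard,
\[ ‹𝗋RECURSION_GUARD(Solve X) (Premises_of X ⟹ Solve X) ⟹ Solve X›. \]›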
definition 𝗋Recursion_Guard_embed :: ‹'a::{} ⇒ bool ⇒ bool›
where ‹𝗋Recursion_Guard_embed _ P ≡ P›
lemma [iso_atomize_rules, symmetric, iso_rulify_rules]:
‹𝗋Recursion_Guard X (Trueprop P) ≡ Trueprop (𝗋Recursion_Guard_embed X P)›
unfolding 𝗋Recursion_Guard_embed_def 𝗋Recursion_Guard_def .
subsubsection ‹Implementation›
definition 𝗋Recursion_Residue :: ‹'a::{} ⇒ bool›
where ‹𝗋Recursion_Residue _ ≡ True›
lemma Do_𝗋Recursion_Guard:
‹ PROP P
⟹ 𝗋Recursion_Residue X
⟹ 𝗋RECURSION_GUARD(X) (PROP P) ›
unfolding 𝗋Recursion_Guard_def .
lemma [φreason 1000]:
‹𝗋Recursion_Residue X›
unfolding 𝗋Recursion_Residue_def ..
ML_file ‹library/recursion_guard.ML›
φreasoner_ML 𝗋Recursion_Guard 1000 (‹𝗋RECURSION_GUARD(?X) (PROP ?P)›) = ‹PLPR_Recursion_Guard.reason›
hide_fact Do_𝗋Recursion_Guard
end