This post is mostly an excuse for me to understand solutions to some problems that I thought were hard. I'll try to structure things so that if you still want to solve the problems yourself you can read just enough for a hint and then try some things with the hint in mind. There is no shame in looking at the solutions, because I had to look myself: the informal proofs were easy, but convincing Coq of the informal reasoning was beyond me, so I looked up the solutions.
The problems come from the chapter on inductive propositions. The chapter itself is a bit unevenly paced so don't fret if you have trouble with it.
The main takeaway for me so far is that mechanical proofs are tedious. One reason, I think, is that it's hard to push assumptions to the relevant places during a proof. Several times I started a proof and hit a dead end: either I had started the induction with the wrong definition, or I was recursing through one structure while needing assumptions from another structure and couldn't thread them through properly, whereas in the informal proof it wasn't a big deal because the threading was obvious. I ran into this issue with the following theorem about regular expressions and the matching relation between strings and regular expressions:
Lemma MStar'' : forall T (s : list T) (re : reg_exp),
  s =~ Star re ->
  exists ss : list (list T),
    s = fold app ss [] /\ forall s', In s' ss -> s' =~ re.
Intuitively the theorem says that if a string s matches a starred regular expression (Star re) then the string must consist of concatenated copies of strings that each match the unstarred regular expression (re).
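As a tiny sanity check of the statement's shape: fold app ss [] just concatenates a list of lists (assuming the fold from earlier in the book), so the lemma asserts that s splits into pieces that each match re.

Example fold_app_example :
  fold app [[1;2];[3]] [] = [1;2;3].
Proof. reflexivity. Qed.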
This statement is obviously true because we can imperatively reason our way to why it's true. If the string is empty then there is nothing to do. If the string is not empty then the starred expression reduces to matching with the unstarred expression by definition. I'm using the definition from the chapter. (This was a bit of a hurdle I had to get through, because I kept thinking of regular expressions as they are in Ruby, and that's not how these are defined. These are closer to the textbook definitions of regular languages and automata.)
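For reference, the matching relation from the chapter looks roughly like this (reconstructed from memory, so your version of the book may differ slightly). Note that Star is defined by the two constructors MStar0 and MStarApp, not by any notion of a repetition count:

Inductive exp_match {T} : list T -> reg_exp -> Prop :=
  | MEmpty : exp_match [] EmptyStr
  | MChar : forall x, exp_match [x] (Char x)
  | MApp : forall s1 re1 s2 re2,
      exp_match s1 re1 ->
      exp_match s2 re2 ->
      exp_match (s1 ++ s2) (App re1 re2)
  | MUnionL : forall s1 re1 re2,
      exp_match s1 re1 ->
      exp_match s1 (Union re1 re2)
  | MUnionR : forall re1 s2 re2,
      exp_match s2 re2 ->
      exp_match s2 (Union re1 re2)
  | MStar0 : forall re, exp_match [] (Star re)
  | MStarApp : forall s1 s2 re,
      exp_match s1 re ->
      exp_match s2 (Star re) ->
      exp_match (s1 ++ s2) (Star re).

Here s =~ re is notation for exp_match s re.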
So, assuming the string is not empty, we find the longest prefix that matches and chop it off. The chopped-off piece goes into our list of matches. We repeat this pattern until only the empty string is left. That's it: we just argued the statement must be true and even gave an implementation strategy for constructing the list of matches. Unfortunately, if you try to use this strategy to prove the theorem you'll immediately hit a wall, because starting the proof with induction on s leads to a dead end.
The problem is that the definitions are not set up in a way that lets us reason around the string being non-empty. They are set up for reasoning about the structure of the regular expressions and the strings therein. So that's the first hint/tip: whenever faced with a proof that requires structural induction, it's usually better to set up the induction over the structure that subsumes as many elements of the theorem as possible. This way we retain more information and, if necessary, can set up nested inductions over the included structures while carrying along the initial hypothesis about the containing structure. This is only a heuristic, by the way, and there might be situations where reversing the order is the correct approach. I haven't yet run into such a situation, but I'm sure such cases exist.
If you still want to solve this yourself then don't read any further: set up the induction over the matching evidence s =~ Star re.
Proof.
  intros T s r H. remember (Star r) as r'.
  (* Most cases are not interesting so we clear them away. *)
  induction H as [| | | | |r'|s s' r'' H _ H' IH'];
    try (inversion Heqr' as [Heq]).
  (* The empty Star case is easy to clear. *)
  - exists []; split; trivial; intros s F; inversion F.
  (* Now we have the real interesting case of matching up a string that
     has two parts. By the inductive hypothesis we know that the first
     part matches and the second part matches, so the two together must
     also match the Star pattern, but we gotta do some work to convince
     Coq. *)
  (* First let's clear up the vacuous and false assumptions and replace
     them with things we can actually use to get closer to our goal. The
     substitutions are very helpful because they clear up the clutter in
     the assumptions and provide guidance on how to perform the
     existential instantiation. *)
  - subst; apply IH' in Heqr'; destruct Heqr' as [x Hx]; clear IH'.
    destruct Hx as [Hfs' Hxr]; subst; clear H'.
    (* After performing the substitutions we have only one possible
       choice for instantiating the existential variable, because we
       have to satisfy [s ++ fold app x [] = fold app ss []]. This means
       the only choice for [ss] is [s :: x], so that's what we
       instantiate with, and the rest follows from typical backward
       chaining. *)
    exists (s :: x); split; simpl; trivial.
    intros s' H''; destruct H''.
    subst; trivial.
    apply Hxr; trivial.
Qed.
The other interesting theorem in that section is the pumping lemma, and for this one I don't really have any intuitions. I don't have an informal argument for why it is true, other than pointing to the equivalence with finite automata: any string longer than the number of states in the automaton must revisit some state and hence form a loop. While doing some obvious searches I found a post about regularity and derivatives which might be useful for developing an intuition.
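Before the hints, it helps to have the relevant definitions in front of you. In the version of the chapter this proof goes through with, they look roughly like this (reconstructed from memory; newer editions of the book use a slightly different pumping_constant for a stronger statement of the lemma):

Fixpoint pumping_constant {T} (re : @reg_exp T) : nat :=
  match re with
  | EmptySet => 0
  | EmptyStr => 1
  | Char _ => 2
  | App re1 re2 => pumping_constant re1 + pumping_constant re2
  | Union re1 re2 => pumping_constant re1 + pumping_constant re2
  | Star _ => 1
  end.

(* [napp n l] repeats the list [l] n times; this is the "pumping". *)
Fixpoint napp {T} (n : nat) (l : list T) : list T :=
  match n with
  | 0 => []
  | S n' => l ++ napp n' l
  end.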
The one hint I'll give you is that at a certain point you have to do induction over m, but it's not obvious where this point should be, so here are a few lemmas that might help you along the way:
(* Used for MApp *)
Theorem sum_leq : forall a b c d,
  (a + b) <= (c + d) -> a <= c \/ b <= d.
Proof. intros; omega. Qed.

(* Used in MUnionL and MUnionR *)
Theorem sum_leq' : forall a b c,
  a + b <= c -> a <= c /\ b <= c.
Proof. intros; omega. Qed.

(* Used in MStarApp *)
Theorem sum_leq'' : forall a b,
  1 <= a + b -> 1 <= a \/ 1 <= b.
Proof. intros; omega. Qed.
The omega tactic makes it really easy to derive these theorems, and they're used in the pumping lemma for forward chaining with the inductive hypotheses. There is another way to make progress in the last case: infer that if the sum of the lengths of two strings is at least one, then at least one of them must be non-empty. I recommend finishing the proof that way as well, to get a feel for how each version plays out.
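For the record, here's one way to state that alternative fact (the name sum_length_nonempty is my own):

Theorem sum_length_nonempty : forall T (s1 s2 : list T),
  1 <= length s1 + length s2 -> s1 <> [] \/ s2 <> [].
Proof.
  intros T s1 s2 H.
  destruct s1 as [| x s1'].
  - right. destruct s2 as [| y s2'].
    + simpl in H. omega. (* both empty contradicts the hypothesis *)
    + discriminate.
  - left. discriminate.
Qed.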
With all of that out of the way, here's the proof of the pumping lemma:
Lemma pumping : forall T (re : @reg_exp T) s,
  s =~ re ->
  pumping_constant re <= length s ->
  exists s1 s2 s3,
    s = s1 ++ s2 ++ s3 /\
    s2 <> [] /\
    forall m, s1 ++ napp m s2 ++ s3 =~ re.
Proof.
  intros T re s Hmatch.
  induction Hmatch as
    [| x
     | s1 re1 s2 re2 Hmatch1 IH1 Hmatch2 IH2
     | s1 re1 re2 Hmatch IH
     | re1 s2 re2 Hmatch IH
     | re
     | s1 s2 re Hmatch1 IH1 Hmatch2 IH2].
  - (* [MEmpty] *)
    simpl; omega.
  - (* [MChar] *)
    simpl; omega.
  - (* [MApp] *)
    simpl; rewrite -> app_length; intro H; apply sum_leq in H; destruct H.
    (* [pumping_constant re1 <= length s1] *)
    apply IH1 in H.
    destruct H as [x0 [x1 [x2]]]; destruct H as [Hs' [Hs'' Hs''']];
      clear IH1; clear IH2; subst.
    (* If you look at the equations that must be satisfied then we don't
       have a choice about how to instantiate things. This is the only
       possible option that will work. *)
    exists (x0), (x1), (x2 ++ s2).
    split; repeat (rewrite <- app_assoc); trivial.
    split; trivial.
    intros; repeat (rewrite -> app_assoc); apply MApp.
    rewrite <- app_assoc. trivial.
    trivial.
    (* [pumping_constant re2 <= length s2] *)
    apply IH2 in H.
    destruct H as [x0 [x1 [x2]]]; destruct H as [Hs' [Hs'' Hs''']];
      clear IH2; clear IH1; subst.
    (* Similar to the previous case, our hand here is forced. *)
    exists (s1 ++ x0), (x1), (x2).
    split; repeat (rewrite <- app_assoc); trivial.
    split; trivial.
    intros; rewrite <- app_assoc; apply MApp.
    trivial. trivial.
  - (* [MUnionL] *)
    simpl; intro; apply sum_leq' in H; destruct H as [H _];
      apply IH in H; clear IH.
    destruct H as [x0 [x1 [x2]]]; destruct H as [Hs [Hx]].
    (* The assumptions again force a unique instantiation. *)
    exists x0, x1, x2. repeat (split; trivial).
    intro; apply MUnionL; trivial.
  - (* [MUnionR] *)
    simpl; intro; apply sum_leq' in H; destruct H as [_ H].
    apply IH in H; clear IH.
    destruct H as [x0 [x1 [x2]]]; destruct H as [Hs [Hx]].
    (* Same as above: we have no choice about how to instantiate. *)
    exists x0, x1, x2. repeat (split; trivial).
    intro; apply MUnionR; trivial.
  - (* [MStar0] *)
    simpl; omega.
  - (* [MStarApp] *)
    simpl in *.
    intro; rewrite app_length in H; apply sum_leq'' in H; destruct H.
    (* [1 <= length s1]: the string must be non-empty and the
       assumptions force our choice to be [s1]. *)
    exists [], s1, s2.
    simpl; split; trivial.
    split.
    destruct s1; simpl in H. inversion H.
    unfold not. intro F. inversion F.
    (* This is the non-intuitive place with induction. *)
    induction m.
    simpl; trivial.
    simpl; rewrite <- app_assoc; apply MStarApp. trivial. trivial.
    (* [1 <= length s2]: this one is a little tricky because we actually
       have some forward chaining to do before instantiating the
       existential, so instantiating the existential ahead of time
       doesn't work as it did above. *)
    apply IH2 in H; clear IH2; clear IH1.
    destruct H as [x0 [x1 [x2]]]; destruct H as [H [H' H'']].
    (* Now the assumptions force our hand and it's basically the same as
       the case for [MApp], which kind of makes sense because [MStarApp]
       does have a shape similar to [MApp]. *)
    exists (s1 ++ x0), (x1), (x2); subst.
    split; repeat (rewrite app_assoc); trivial.
    split; trivial.
    intros; rewrite <- app_assoc; apply MStarApp.
    trivial. trivial.
Qed.
To be honest I'm no closer to understanding the pumping lemma after the proof than I was before it, but I think I have a better understanding of how I'd go about trying to come up with pump-able strings, like the book says. I haven't put this to the test, but I'm convinced that following along the proof would be a good guide for writing that code. One could also just use code extraction, but where's the fun in that?