History, Advancements, and Design Choices

The original idea of HELM goes back to the end of 1997.
What follows is the progress report, starting from November 1999.


November 2 1999

On the fragmentation of the library.

A main design choice concerns the degree of fragmentation of the library. There are two extreme possibilities:
  1. Big XML documents, corresponding, say, to typical libraries of proof assistant applications (these are quite big, usually).
  2. Very small documents, each one containing a single information "unit". Here, there seem to be two main possibilities, according to what we consider as a unit:
    1. A document for each term;
    2. A document for each definition (i.e. a pair of terms).
Solution 2 has two main advantages:
1. independent access to each information unit.
2. clear distinction between the "low" levels of the components of the library (terms) and the "high" - or structural - levels (sections, chapters, etc.).

Let us discuss them. Having independent access to each information unit could be a drawback in case the typical access follows some strong locality principle. This is probably the case both for browsing (we would like to "scroll" along some coherent piece of "theory") and for type-checking. However, for type-checking a suitable caching mechanism would probably be enough, while for browsing we are already confusing the term level with the structural one.

The idea is that at the term level we only have logical dependencies, corresponding to pointers to other documents. These documents are collected in a hierarchy of directories, reflecting their main structure. But nothing prevents us from adding more documents, e.g. metadata documents, for instance for browsing purposes.

Metadata documents should (also?) describe how to structure a presentation of the theory. They should be generated either manually (the user can change the presentation of a theory) or automatically (from the .v files, reflecting the presentation choices of the original author).

It seems plausible that the term and the structural level could be developed in a quite independent way. In particular, the term level could (should?) be independent from the higher levels. Of course, a term has (implicit) information about its logical context (e.g. the "environment"). It is not so clear whether it would be worth making part of this implicit information explicit. Adding redundancy to the information raises the problem of preserving its internal consistency.

Another point to be considered is that the decomposition into single information units seems to go somewhat against the very nature of hypertextual documents. In particular, the final aim is to avoid "anchors" inside the same file (although some "anchors" would probably remain inside "mutual inductive" definitions). This seems somewhat at odds with the choice of XML and its features. In other words, we just use XML as a convenient (?) exchange format, and little more.

Finally, the distinction between 2.1 and 2.2 does not seem to be a major one. Probably 2.1 is better. Intermediate solutions between the two extremes 1 and 2 do not seem to be particularly appealing.


November 26 1999

Universes

The implicit handling of the level of Universes can be problematic. Here is an example of two theorems which are both (independently) correct in Coq, but whose "conjunction" is not...(!) More precisely, the two theorems cannot be imported at the same time.
file definitions.v
Definition P := Type -> Type.
Definition Q := Type -> Type.
Definition p : P := [x:Type]x.
Definition q : Q := [x:Type]x.

file one.v
Require definitions.
Theorem easy1 : (a : (p Q))True.
Auto.
Qed.

file two.v
Require definitions.
Theorem easy2 : (a : (q P))True.
Auto.
Qed.

We have no solution at present. Probably, there is none: the user must just be warned of this possible behaviour. Does the management of Universes conflict with our Caching mechanism?

Access to data.

Regarding the fragmentation of the library, we have finally adopted solution 2.2. This raises some other concerns about the management of sections. In Coq, we have two possible representations for objects defined inside sections, namely cooked or uncooked. Roughly, cooking a term amounts to abstracting the term w.r.t. its local variables; on the other hand, an uncooked term is represented as a "recipe", that is, essentially a closure.

Since sections can be nested, the same term has different representations (views) at different nesting levels (possibly cooked or uncooked).

The "cooking" policy of COQ is far from clear.

Saving the library in XML, we would like to avoid explicitly saving all the possible views of terms. The natural solution is to keep only the definition at the deepest nesting level (i.e. the actual definition of the term in its original section). However, this raises the question of how to correctly access the term from outside this section (note that the type changes according to the view).

For type-checking purposes, we need the ordered list of its free variables. The problem is that by visiting the term (recursively through its constants), we just obtain the UNORDERED list of its free variables. The only alternative to adding the explicit list to each definition was that of exporting the order in which variables are defined in the section. But this information is less convincing: other sequences of definitions could exist that are, from the logical point of view, equivalent. So the exact order is only an accident.
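
To make the problem concrete, here is a small OCaml sketch (the term syntax and the names are invented for illustration, not the actual HELM datatypes): a recursive visit naturally yields only a set of free variables, so the order in which they must be abstracted has to come from somewhere else, e.g. from an explicit list attached to the definition.

    (* Hypothetical, very simplified term syntax. *)
    type term =
      | Var of string                      (* section variable, by name *)
      | App of term * term
      | Lambda of string * term * term

    module S = Set.Make (String)

    (* Visiting the term yields the free variables only as a set,
       i.e. with no canonical order. *)
    let rec free_vars = function
      | Var n -> S.singleton n
      | App (f, a) -> S.union (free_vars f) (free_vars a)
      | Lambda (x, ty, body) ->
          S.union (free_vars ty) (S.remove x (free_vars body))

    (* The order needed for abstraction must come from elsewhere: either an
       explicit list attached to the definition, or the order in which the
       variables were declared in the section. *)
    let ordered_free_vars ~declaration_order t =
      let fv = free_vars t in
      List.filter (fun v -> S.mem v fv) declaration_order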

Actually, the problem looks more general, and in particular it concerns the way we access data in the system. Coq is mainly based on a functional "style". When we close a section we export (some of) the local definitions, abstracting them w.r.t. the local variables. As a consequence, any time we access the term from "the outside" we have to explicitly pass arguments instantiating the "local" parameters.

We could imagine a discipline more akin to object oriented programming, where we can for instance open a section instantiating (some of) the parameters once and for all.

Does this eventually imply that we have to treat "sections" at the term level of the language? (This is still open at the moment.)

Addressing Variables

Remark: we have the first version of the whd (weak head) reduction algorithm!

Another problem is the addressing of variables. In the current representation, based on Coq, a variable is just a name Var(name); this contrasts with axioms and constants, which are represented by a pair (section_path, name). A section path is an absolute path; it essentially corresponds to our Uniform Resource Identifier (URI), with the only difference that we also add the "theory". For brevity, we shall use URI as a synonym of section path.

With our representation of the library, using just names for variables looks a bit problematic. Every time we have to resolve a reference to a variable we would have to visit the hierarchy of directories (sections) from the section where the variable appears to the outermost one (quite heavy).

What is the drawback of accessing variables via URI, in the same way as we do for constants? Well, unlike for axioms and constants, we have a rigid discipline for accessing variables. In particular, from a given section we can directly access variables declared in enclosing sections, but not e.g. in siblings. With a URI this is not evident. In other words, we would have to add a consistency test. It is the usual problem: we are adding redundancy to our representation, and we must guarantee its internal coherence.

There is a third solution, however: using relative URIs encoded in a way that does not require any check. For example, instead of coding a variable A defined in a super-super-section as the relative URI "../../A.var" and checking that the URI is well-formed according to the regular expression "(../)*{name}.var", we could encode the URI in the obvious way: (2,"A.var"). This is not forbidden by the RFC on URIs. The XML parser, then, should only check (as usual) that the number is a valid one. The benefit of this solution is that it makes explicit which checks are actually performed.

Coming back to our representation of local terms in the previous section, we also had a list of variables. This list is now a list of relative URIs (encoded as above, with integers).

Relative URIs are transformed into absolute ones during parsing. Having an explicit list of absolute URIs does simplify our life in accessing data "from outside", since we can easily recognize which variables are to be considered as "abstracted" at the current level, and thus passed as arguments.
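
A possible shape of this encoding, as an illustrative OCaml sketch (the representation of URIs as lists of path segments is an assumption made here, not the actual UriManager one): resolution against the base URI needs no well-formedness check beyond verifying that the number of levels to go up is not too large.

    (* Hypothetical representation: an absolute URI as its list of segments,
       e.g. ["cic:"; "coq"; "INIT"; "Logic"] for the enclosing sections. *)
    type absolute_uri = string list

    (* A relative reference to a variable: how many section levels to go up,
       plus the file name, e.g. (2, "A.var") instead of "../../A.var". *)
    type relative_uri = int * string

    exception Invalid_reference

    (* Drop [up] trailing segments from the base URI and append the name.
       The only check needed is that [up] does not exceed the nesting depth. *)
    let resolve (base : absolute_uri) ((up, name) : relative_uri) : absolute_uri =
      let depth = List.length base in
      if up < 0 || up >= depth then raise Invalid_reference
      else
        let rec take n = function
          | x :: xs when n > 0 -> x :: take (n - 1) xs
          | _ -> []
        in
        take (depth - up) base @ [name]

    (* Example: resolving (2, "A.var") against cic:/coq/INIT/Logic
       yields cic:/coq/A.var. *)
    let _ = resolve ["cic:"; "coq"; "INIT"; "Logic"] (2, "A.var")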

Remark. Using recipes/cooked terms, we have to do the previous computation once and for all when closing the section. With our approach, we must compute the "current view" of the term dynamically, each time we access it. How heavy is this? The "cache" does not help here.


December 17 1999

We have the first version of the type-checker! Some aspects are still missing, notably:
  1. all "internal" checks on fixpoints, cofixpoints, inductive and coinductive definitions, etc...
  2. universes
The purpose of the current version of the type-checker was mainly to verify that all relevant information was present in the low-level XML representation. It is as simple as possible; in particular, performance has not been taken into account.

A lot of problems have been pointed out:

  1. Dummy parameters in inductive definitions. Dummy parameters are presented in the CIC reference manual as syntactic sugar. Surprisingly, this "syntactic sugar" generates two different internal representations!! (WHY!?!) As a consequence, we need a function that, given the definition of the inductive type, looks whether the parameter is dummy or not. (add details?). Question to the Coq people: why not export a flag which says whether the parameter is dummy or not?
  2. Parameters and arguments. In CIC, there is a distinction between parameters and arguments in inductive definitions. Initially, we tried to "guess" this information from the actual definition of the inductive type. However, we also need the same information during reduction, for mut_case (only the arguments, not the parameters, are passed to the body of the pattern). Our guess_arguments function is in the type-checker (and uses the environment of the type-checker). It was difficult to integrate with the reduction algorithm (which does not need such an environment). Moreover, this was going to create a mutual dependency between reduction and type-checking, which we prefer to avoid. As a consequence, we now also export in XML the number of parameters.
  3. Fix and Cofix. During reduction (weak head), in order to preserve the termination of type-checking, we have to expand a Fix only if its inductive argument reduces to a constructor, and conversely expand a CoFix only if it appears as an argument of a mut_case. This is clear now, but we overlooked the point in the first draft; so it is worth stressing it.
  4. Prop, Set and Type. In some cases, Prop and Set are convertible with Type. For instance, a function of type Type->Type can be applied to an element of type Prop or Set, but a function of type Prop->Prop cannot be applied to an element of type Type. In general, an element of type Type(i) is also of type Type(i+1). So, the conversion function is not symmetric, and we have to swap the order of its parameters when we pass from a covariant to a contravariant position. This is not implemented yet (like everything concerning universes). At the moment, we identify all sorts.
  5. Cooking. This has been the most difficult part so far. Remember that we cook "on the fly". Cooking a term amounts to abstracting it w.r.t. its free variables, and then scanning the term, replacing each constant with an application of the constant to the abstracted variables, whenever those variables belong to the free variables of the constant.

    The first problem is that "free" is a relative notion, i.e. if we are cooking at a level defined by a given base URI, only variables below that URI should be taken into account. This is not particularly problematic, but we have been forced to add the base URI as an explicit parameter to the type-checker.

    The second problem is that, so far, only some constants had an explicit list of parameters. In particular, local definitions had none: since they cannot be exported (and thus cannot be "cooked"), it seemed useless to add the list. The problem is that local definitions do appear in exported ones: when cooking the exported definition, we need the list for the local ones in order to understand whether they must be transformed into an application or not (things are even more complex in Coq, since we have odd notions such as "local to the current and the above section"). So, every constant must be equipped with its ordered list of free variables. In order to compute the list for non-local definitions, our approach was the following (this was done while exporting from Coq):

    1. keep the ordered list of all variable declarations (potential parameters).
    2. associate to each constant definition its potential parameters.
    3. when meeting (at the top level) the most cooked version of the definition, trim the potential parameters w.r.t. the actual parameters (initial quantifications) of the most cooked version (actually this approach also contained a bug, since the less cooked definition may contain a "for all" on a variable with the same name as a potential parameter; this bug was DIFFICULT to track).
    In any case, we cannot adopt a similar approach for local definitions, since they are not cooked!

    In the current version we fix the problem with a horrible "patch": we associate to each local definition the "potential parameters"; then, at run time, we scan the constant to get the list of its free variables and trim the potential parameters accordingly. This has been done just to check the correctness of the type-checker. Eventually, we will have to change our strategy for exporting variables from Coq.
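
    To fix ideas about the cooking step of point 5, here is a rough OCaml sketch on an invented, simplified term syntax (names and types are not the actual HELM ones): the definition is abstracted over the section variables it uses, and every constant occurring in it becomes an application of that constant to the abstracted variables it depends on.

      (* Hypothetical, simplified syntax: named section variables, constants
         carrying the ordered list of section variables they depend on, and
         the usual application / product / lambda constructs. *)
      type term =
        | Var of string
        | Const of string * string list   (* constant name, its free section variables *)
        | App of term * term list
        | Lambda of string * term * term
        | Prod of string * term * term

      (* Replace every constant with the application of the constant to those
         of its free variables that are being abstracted at this level. *)
      let rec relocate abstracted t =
        match t with
        | Var _ -> t
        | Const (_, fv) ->
            let args =
              List.filter_map
                (fun v -> if List.mem v abstracted then Some (Var v) else None)
                fv
            in
            if args = [] then t else App (t, args)
        | App (f, args) ->
            App (relocate abstracted f, List.map (relocate abstracted) args)
        | Lambda (x, ty, b) -> Lambda (x, relocate abstracted ty, relocate abstracted b)
        | Prod (x, ty, b) -> Prod (x, relocate abstracted ty, relocate abstracted b)

      (* Cooking one level: given the section variables to be abstracted, with
         their types and in declaration order, wrap the body in lambdas and fix
         up the constants occurring in it. *)
      let cook ~section_vars body =
        let abstracted = List.map fst section_vars in
        List.fold_right
          (fun (v, ty) b -> Lambda (v, ty, b))
          section_vars
          (relocate abstracted body)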

Some data

Finally, some data. The internal XML representation of the standard library of Coq takes ~80 MB (4 MB after compression: it is almost all tags). It is pretty-printed, so a lot of space is taken by "blanks". Moreover, ~50% of it is "syntactic sugar", for readability purposes. Is it worth moving towards a more compact representation? Maybe, but this topic looks a bit marginal at present.

The type-checker is slow: more or less ten times slower than Coq (maybe more). Remarkably, ~25% of the time is taken by the scanner (due to the size of the files) and ~25% by the (validating) XML parser.

January 10 2000

URI Manager

For modularity reasons, URIs have been defined as an abstract datatype, with a suitable module to access them (UriManager). We have tested several different implementations of this module, with no major differences in performance (the time gained in type-checking is essentially lost during parsing). We shall probably retain the simplest one.
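
As an illustration, a plausible shape for such a module could be the following (the exact signature of the real UriManager may well differ); keeping the type abstract is what allows us to swap implementations without touching the rest of the code.

    (* A plausible interface, kept abstract on purpose. *)
    module type URI_MANAGER = sig
      type uri                                   (* abstract *)
      val uri_of_string : string -> uri          (* parse the "cic:/..." form *)
      val string_of_uri : uri -> string          (* back to the external form *)
      val eq : uri -> uri -> bool                (* cheap equality test *)
    end

    (* The simplest implementation just stores the string itself; fancier
       ones may intern URIs or split them into segments. *)
    module SimpleUriManager : URI_MANAGER = struct
      type uri = string
      let uri_of_string s = s
      let string_of_uri u = u
      let eq = String.equal
    end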

Local Definitions

The problem of variables for local definitions described in the previous report has been fixed. The code that deduces, from the list of possibly occurring variables, the list of actually occurring variables has been moved from the type-checker to a brand new command that must be run once and for all after exporting the local definitions from Coq. The aim of the new command is to parse the definition, compute the list of occurring variables and write the definition back to XML. Hopefully, by interacting with the developers of Coq, it will be possible to get rid of all this by exporting directly from Coq the list of actually occurring variables.

Cooking

Cooking is now done once and for all the first time we ask to type-check a constant. The term is cooked at all possible levels, which are stored in the cache. The improvement in performance, although noticeable, has been less impressive than one could expect.

Getter

The getting mechanism has been entirely rewritten. Now we automatically contact a list of http (tested) and ftp (to be tested) sites, and all missing files are downloaded. We use normal text files for exchange, and NDBM files to access the local database associating URLs to URIs. Still to be solved (with Markus Mottl, author of Markup, the XML parser for OCaml) is the problem of getting absolute URIs occurring in XML files (external entities and DTD declarations).
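
A rough sketch of the getting logic, under invented assumptions (the server list, the database and the download function are all placeholders; the real getter uses NDBM files): look the URI up in the local map, and if it is missing try each remote site in turn, download the file and record the local copy.

    (* Hypothetical sketch: the database is a Hashtbl and the download
       function is passed in as a parameter, assumed to return true on
       success. *)
    let servers = ref [ "http://example.org/helm"; "ftp://example.org/helm" ]

    (* uri -> local file name of the already downloaded copy *)
    let local_copies : (string, string) Hashtbl.t = Hashtbl.create 97

    (* [getter ~download uri] returns the name of a local file holding the
       document identified by [uri], fetching it from the first server that
       has it when no local copy exists. *)
    let getter ~download uri =
      match Hashtbl.find_opt local_copies uri with
      | Some file -> file
      | None ->
          let dest =
            Filename.concat (Filename.get_temp_dir_name ())
              (Digest.to_hex (Digest.string uri))
          in
          let rec try_servers = function
            | [] -> failwith ("getter: " ^ uri ^ " not found on any server")
            | srv :: rest ->
                let url = srv ^ "/" ^ uri in
                if download url dest then begin
                  Hashtbl.replace local_copies uri dest;
                  dest
                end else try_servers rest
          in
          try_servers !servers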

January 21 2000

Good news on the performance issue!! A simple heuristic (an initial check for structural equality during conversion) provided a drastic improvement in performance. We are now comparable with Coq. HELM is on its way.
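
The heuristic amounts to something like the following OCaml sketch (on a made-up term type, not the actual code): before doing any reduction, test the two terms for plain structural equality, and only when that fails fall back to the expensive comparison based on weak head reduction.

    (* Hypothetical types: the point is only the shape of the check. *)
    type term = Sort | Rel of int | App of term * term list
    type environment = unit

    (* Placeholder for the real, expensive conversion test based on
       weak head reduction. *)
    let whd_convertible (_env : environment) t1 t2 =
      t1 = t2   (* stand-in: the real test reduces both terms *)

    (* The heuristic: structural equality is cheap and very often succeeds
       (many conversion checks compare a term with itself), so try it first. *)
    let are_convertible env t1 t2 =
      t1 = t2 || whd_convertible env t1 t2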

February 2 2000

A lot of improvements!!!

Internal representation

To cook the objects at every level only when we ask to type-check them (see January 10 2000), we had to change the internal representation of MutInd, MutConstruct, MutCase and Const, associating with the URI of the object they refer to an integer that is the number of times the object must be cooked. So, for example, (Const "cic:/coq/INIT/Logic/not.con" 2) means the constant not cooked two times, i.e. the cooked constant not whose base URI is "cic:/coq/".

Here we have a space optimization problem: for example, if an object depends only on variables defined 4 levels and 7 levels above, then the objects cooked 0 to 3 times, and those cooked 4 to 6 times, are identical but for the number of times that an object is cooked in the nodes above (MutInd, MutConstruct, MutCase and Const). So the simple term (Const "cic:/coq/INIT/Logic/not.con" 0), whose base URI is cic:/coq/INIT/Logic and which does not depend on any variable, cooked one time becomes (Const "cic:/coq/INIT/Logic/not.con" 1). This means that we would have to store in the cache terms that differ only in a few inessential pieces of information, wasting space.

It turns out that, in order to solve the problem, we have to do nothing at all! In fact, if an object O does not depend on variables one level above, then every object on which O depends cannot depend on any variable at that level, and so is "equal" to the object cooked one time less, up to the new cooking numbers. Since these numbers are used only during delta-reduction and when fetching the type of cooked terms, it is not important that the numbers are the right ones, but only that they refer to an undercooked object that is essentially "equal" to the right one. So, in the starting example, we can store in the cache only the object cooked 0, 4 and 7 times.

A bigger example could help understanding: let's define the sum and product of two natural numbers so that their URIs are cic:/coq/INIT/Datatypes/sum.ind.xml and cic:/coq/INIT/Datatypes/prod.ind.xml. The definition of prod uses the definition of sum and neither one depends on any variable. So, once type-checked, we store them in the cache cooked only 0 times and we will return them when they are asked for cooked n times, for any n. Now, let's define a new constant foo, whose URI is cic:/examples/foo.con.xml, which depends on prod. Obviously it depends on prod cooked 2 times. When, during type-checking, we ask for the type of the constructors of prod cooked 2 times, we get a reference to sum cooked 0 times. But sum should be cooked 2 times, not 0! Nevertheless, there is actually no real difference between sum cooked 0 and sum cooked 2 times, so type-checking will not be affected at all by this.
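
A hedged sketch of the representation and of the cache policy just described (constructor and function names are only illustrative): references carry a cooking number, but the cache stores an object only at the cooking levels where it actually changes, and a lookup falls back to the greatest stored level not exceeding the requested one.

    (* Illustrative node shape: a reference = URI plus cooking number. *)
    type reference = { uri : string; cooked : int }

    (* The cache maps a URI to the association list of the cooking levels
       at which the object really differs, kept in increasing order. *)
    type 'obj cache = (string, (int * 'obj) list) Hashtbl.t

    let cache : string cache = Hashtbl.create 97

    let store ~uri ~cooked obj =
      let levels = try Hashtbl.find cache uri with Not_found -> [] in
      Hashtbl.replace cache uri (List.sort compare ((cooked, obj) :: levels))

    (* Asking for an object cooked n times returns the version stored at the
       greatest level <= n: by the argument above, it is "equal" to the right
       one up to the cooking numbers of the references it contains.
       E.g. sum stored only at level 0 is also returned when asked at level 2. *)
    let fetch ~uri ~cooked =
      let levels = try Hashtbl.find cache uri with Not_found -> [] in
      let rec best acc = function
        | (lvl, obj) :: rest when lvl <= cooked -> best (Some obj) rest
        | _ -> acc
      in
      match best None levels with
      | Some obj -> obj
      | None -> failwith ("not yet type-checked: " ^ uri)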

Type-checker

All the required checks, except the one on universes, have been implemented. Some definitions given in the Coq manual and in the available papers on CIC are outdated with respect to Coq, so we have had a lot of trouble finding the right ones:
Strict and weak positivity conditions for the constructors of an inductive type:
The definitions given in the Coq manuals are no longer the ones used by Coq. In fact, it is now possible to define a constructor for the inductive type t that takes as input a parameter of type (t' x1 ... xn t x'1 ... x'n), where t' is another inductive type, iff t and t' fulfill some conditions. These conditions seem never to have been documented, so we have asked Hugo Herbelin for them directly.
Guarded-by-destructors and guarded-by-constructors conditions on Fix and CoFix definitions:
The definitions are given in a paper by Eduardo Gimenez (which one???), but only for the case of a single definition, without mutual recursion. We have extended the definitions to the mutual case. Then we found out that Coq accepts an even wider set of definitions. We have then given a new definition, but we are not sure whether it is equivalent to the one of the Coq system, nor whether the system is still logically consistent (but we strongly believe so!)
Up to now, during type-checking, each time a Fix or a CoFix is met, we check all the necessary conditions on it. During reduction it may happen that the Fix or CoFix is replicated, i.e. substituted for a De Bruijn index. Then, when it is met a second time, all the checks are performed again, even the ones that will surely be passed. It should be possible to add a flag to each (Co)Fix, recording whether the checks have already been performed, but we have not tried this yet.
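
The idea could be realised, for instance, along these lines (purely a sketch of the untried idea, not what is implemented): each (Co)Fix node carries a mutable flag, set the first time the guard conditions are verified and consulted on every later encounter.

    (* Illustrative only.  A mutable flag on each Fix/CoFix node remembers
       whether the guard conditions have already been verified, so that a
       node duplicated by reduction is not re-checked. *)
    type fix_node = {
      body : unit;                   (* stands for the actual (co)fixpoint *)
      mutable guard_checked : bool;
    }

    let check_guard (check : fix_node -> bool) (f : fix_node) =
      if f.guard_checked then true
      else begin
        let ok = check f in
        if ok then f.guard_checked <- true;
        ok
      end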

Some data

Even with the new checks, the type-checker is really fast. About 90% of the time is spent in parsing and cooking (the latter seems to take no considerable time, less than 2%), and only 10% of the time in the real type-checking. It remains to be understood how the heuristic will behave once universes are implemented.

CIC DTD

The DTD has been revised. In fact, only the names of some entities and attributes have been changed. Now the naming choices are clearer and more homogeneous.

Interface

Up to now, the interface was based on Gtk text widgets, and we had a pretty-printer that, given a CIC term, returned a string with the same ugly representation used by Coq; the string was shown using the text widget.

Now we have a new, very simple pretty-printer that, instead of returning a string representation of the term, outputs a MathML file defining, through presentation markup, the same ugly presentation used before. We also have a new Gtk widget, inherited from the textual one, that, given the name of a file written in a subset of MathML, renders it. Currently, only the mi, mo, mfenced and mtext tags can be used.

Visually, the two interfaces, the old one and the new one, are indistinguishable. The new interface is surely slower than the old one (the MathML file must be parsed, currently via Markup), but to the user it seems as fast as before.

Now we are ready to split up and start working in parallel: one group will work on implementing from scratch a new Gtk widget to render MathML presentation markup and interact with it; the other one will focus on writing a transformation (via XSLT stylesheets) of CIC XML files into MathML content and presentation markup. The results of the work of the two groups will replace the new pretty-printer and the small MathML widget currently implemented. In the meantime we'll add universes to the type-checker.

March 3 2000

Stylesheets

Now we have two kinds of output associated via stylesheets with the CIC XML files: the first one is a MathML file, in particular using MathML presentation markup elements (possibly embedded in HTML), and the second one is an HTML file. A sketch of the transformations on CIC XML files is given by the following schema:
Figure 1: Transformations on CIC XML files: the backward arrow means that the MathML presentation files can be modified and these changes also take effect on the corresponding CIC XML files.
These two outputs are chosen through a CGI interface, which passes some variables to a particular stylesheet. This first transformation creates in the CIC XML file the header for applying the right stylesheet. Then, as you can see in the schema, there are two transformations via XSLT: the first one produces MathML content markup and the second one produces either MathML presentation markup or HTML markup. Note that the MathML format can describe only CIC terms (expressions), so we had to add a second language to describe CIC objects in the intermediate step.

The use of MathML presentation markup for rendering purposes is justified by the aim of its specification, namely to provide a standard way to enable mathematics to be served, received and processed on the World Wide Web. The choice of the intermediate format of MathML content markup, instead, is justified by several reasons:

  1. at this "semantic" level we can embed different theories coming from several formal systems (codified by different XML formats);
  2. MathML content markup, like the presentation markup, is intended to be a standard for encoding mathematical content, so this characteristic of portability can be useful to cut and paste terms from an environment to another;
  3. this level simplifies the structure of a CIC term: there is no more syntactic sugar, but only "expressions";
  4. at this level we can capture the semantic of particular terms, marking them with more specific elements.

MathML limits

During the realization of the stylesheets, we have had to face and solve several problems mainly connected to the MathML specification.
  1. As we have said, MathML content elements do not describe objects like the ones defined in CIC. CIC objects such as definitions, axioms, proofs and variable definitions can be considered either as terms or expressions, and hence MathML elements, or as part of the structure of the mathematical document, since they are typical of the CIC formal system and do not have a general meaning. The solution to this problem has been to add to the MathML content markup an XML level which describes the CIC objects in the intermediate step (mixing the two will be possible using the W3C namespaces).
  2. For most CIC terms there are no specific MathML content markup elements, since we are semantically describing particular terms of a formal system rather than mathematical formulas. To fix this problem we are using the "csymbol" markup, which is used for constructing a symbol whose semantics is not part of the core content elements provided by MathML, but defined externally. On the other hand, we will produce, via stylesheets, specific MathML content markup elements that may describe the corresponding mathematical symbols defined in the extensions of the CIC language. These symbols, belonging to a particular theory defined in the CIC formal system, can be better described and rendered by the corresponding MathML elements.
  3. Another problem, strictly connected to the previous one, regards the description of types. There is no MathML content feature to describe or assign a type. We have had to introduce a provisional markup "type", outside the MathML specification, to assign a type to a variable. Our next step will be to point out to the MathML Working Group the importance of providing a set of content elements for describing at least the basic types of Type Theory and Set Theory, and a feature to assign a type to a variable.
  4. As you can see in the schema, we produce MathML content and presentation in two distinct steps. The only way to combine and link together content and presentation in compliance with the MathML specification consists of using the "semantics" element. This element is quite ambiguous in its use, as a kind of "bridge" element between content and presentation, and for this reason it is not recognized by any browser implementing MathML. This element will be ignored by our widget, which tries to render only its first child, throwing away the other children. In our opinion, the next natural step for the MathML specification should consist of having content and the associated presentations in different files (documents), one for the content expression and one for each presentation. Then we need links from a presentation expression and its subexpressions to the respective content expression and subexpressions. This can be achieved using the "id" attribute and, instead of the cross reference "xref", using the machinery of XLink. For instance, the already existing attribute "href" can be used in a presentation element to fulfill this aim.
  5. The above solution could make it easier to implement the (hyper)links connecting the original CIC XML terms to the corresponding content and presentation elements. In this way the user can browse the HTML and MathML presentation files, and can also modify the MathML presentation output so that these changes take effect on the corresponding CIC XML file. As we have said above, this can be achieved using the machinery of XLink.
  6. Finally, we have had to reckon with the limits of the main tools implementing MathML. We have not found a browser implementing the whole MathML recommendation: in fact, we have not even found a browser able to render all the presentation elements (and their attributes)!!! So, while waiting for better implementations, we have decided to also produce HTML pages with an embedded applet to render the presentation markup. The applet is the WebEQ one, which recognizes almost all of the presentation elements, even if it does not recognize all the attributes. To avoid downloading the applet classes every time, it is possible to "install" the applet by downloading it from the WebEQ site once and for all.

April 14 2000

Two different interfaces

We need to use XSLT stylesheets and MathML browsers, but:
  1. There is no browser currently implementing the full XSLT recommendation;
  2. There is no browser currently implementing the full MathML recommendation.
We also want any user with some web space (either an http or an ftp one) to be able to publish their documents. So we have chosen to provide two different interfaces to the library: the first one is server-based and requires the XML publishing framework Cocoon (see http://xml.apache.org) on the server and a simple MathML-aware browser on the client; the second one requires just a simple http or ftp server, but a fairly complex browser on the client side, able both to render MathML presentation markup and to apply XSLT stylesheets to XML documents. This browser is the one we have been implementing.

Server side interface

The library is online! We are using Cocoon to apply the stylesheets, some CGI scripts to browse the library and do dirty tricks, and some XSP to decouple the XML files from the stylesheets that must be applied. Eventually, when Cocoon 2.0 is released, we won't need the XSP stuff any more.

Client side interface

A lot of changes! The code of the interface has been entirely rewritten from scratch. During the development we encountered a lot of problems due to the OCaml binding of Gtk. In particular, some changes have been made to the binding; now we are trying to get our changes into the main distribution.

In particular, we now have a more powerful MathML widget and a brand new binding to an XSLT processor:

MathML widget

We still don't have the real MathML widget (but we are working on it: up to now we have written about 10,000 lines of C++ code). The small widget we had built has been improved, so that it is now possible to select a subexpression or click on it. Selecting a subexpression means being able to get from the widget an XPath identifying a node of the CIC DOM tree. The selection (and hence the clicking) is simulated (the user writes the XPath).

XSLT processor

Now that we have real stylesheets, we need a real XSLT processor. There is no processor yet that is fully compliant with the recommendation and written in C, C++ or OCaml (for efficiency reasons). So we have chosen to use Xalan (see http://xml.apache.org), which is written in Java (and is quite slow), because it will be ported (hopefully soon) to C++.

We can't load the JVM every time we need to process a file. Moreover, reading and preprocessing our stylesheets takes Xalan a lot of time (about 1 minute). So we have written a small UDP daemon that at startup preprocesses the stylesheets and then waits for requests: a request specifies the name of the file to which the stylesheets must be applied. The daemon applies the stylesheets, writes the result to disk and answers the client. The whole operation still requires too much time (a small theorem takes up to 15 seconds to be processed). Using a cache is not manageable (the MathML files require about 12 times the space of the original CIC files). The only hope is to wait for the C++ port.
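
The protocol of the daemon is trivial; here is a minimal OCaml sketch of the same scheme (the port, names and the call to the XSLT processor are invented): bind a UDP socket, read a file name from each datagram, run the preprocessed stylesheets on it, write the result to disk and send an acknowledgement back.

    (* Minimal sketch of the stylesheet daemon (requires the unix library);
       the real one talks to Xalan.  [apply_stylesheets input] is assumed to
       write its output to disk and is invoked here as an opaque function. *)
    let run_daemon ~port ~apply_stylesheets =
      let sock = Unix.socket Unix.PF_INET Unix.SOCK_DGRAM 0 in
      Unix.bind sock (Unix.ADDR_INET (Unix.inet_addr_any, port));
      let buf = Bytes.create 1024 in
      while true do
        (* One request = one datagram carrying the name of the file to process. *)
        let len, client = Unix.recvfrom sock buf 0 (Bytes.length buf) [] in
        let filename = Bytes.sub_string buf 0 len in
        let reply =
          try apply_stylesheets filename; "done" with _ -> "error"
        in
        ignore
          (Unix.sendto sock (Bytes.of_string reply) 0 (String.length reply) [] client)
      done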

Annotations

The CIC DTD has been modified so that every element can now have an annotation child. The client-side interface has been modified so that it can now be used to annotate a term (currently by writing the XML content of the annotation child in a text field). The stylesheets have been modified so that, if an element has an annotation, the annotation is displayed instead of the term. Now we must develop two things:
  1. A language for structuring annotations. Structuring an annotation means describing how annotations of subterms can be used to annotate a superterm. For example, an annotation of a lambda-abstraction "\lambda P:Prop.T" could be "let P be a proposition. Then <sub/>", where <sub/> must be substituted during rendering (= application of the stylesheet from CIC to MathML content) with the annotation of the subterm T (or with the subterm itself if no annotation is provided).
  2. A language for the content of annotations. Annotations cannot be just plain text. In particular, they must be written in a structured language in which MathML can be embedded. The right choice seems to be XHTML, but no decision has been taken yet.
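
As a toy illustration of what "structuring" could mean (entirely hypothetical, since no decision has been taken), here is an OCaml sketch in which the annotation of a superterm is a template whose "<sub/>" placeholders stand for the renderings of its subterms, each of which is either the subterm's own annotation or its default rendering.

    type node = {
      default : string;               (* rendering of the bare (sub)term *)
      annotation : string option;     (* template with "<sub/>" holes    *)
      children : node list;
    }

    (* Render a node: its annotation, if any, with each "<sub/>" replaced by
       the rendering of the corresponding child; otherwise its default form. *)
    let rec render n =
      match n.annotation with
      | None -> n.default
      | Some template ->
          let remaining = ref (List.map render n.children) in
          let buf = Buffer.create 64 in
          let hole = "<sub/>" in
          let len = String.length template and hlen = String.length hole in
          let i = ref 0 in
          while !i < len do
            if !i + hlen <= len && String.sub template !i hlen = hole then begin
              (match !remaining with
               | r :: rest -> Buffer.add_string buf r; remaining := rest
               | [] -> Buffer.add_string buf hole);
              i := !i + hlen
            end else begin
              Buffer.add_char buf template.[!i];
              incr i
            end
          done;
          Buffer.contents buf

    (* Example from the text: render gives "let P be a proposition. Then T". *)
    let _ =
      render
        { default = "\\lambda P:Prop.T";
          annotation = Some "let P be a proposition. Then <sub/>";
          children = [ { default = "T"; annotation = None; children = [] } ] }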

May 8 2000

Configuration

Now there is an XML file in which we have put all the configuration stuff. All the executables are now easily installable on any machine, but they are still inherently single-user (for example, what if two users want to annotate the same file in different ways?). This has to be addressed sooner or later.

Stylesheet for the theory level

We now also have the first stylesheet for the theory level. The time taken to apply the stylesheet, even to simple theories, is really too high: about 2 minutes for a theory with about 20 objects.

May 12 2000

A lot of changes:

May 26 2000

We have spent quite a lot of time in preparing a couple of articles describing the current state of the project.

Some issues on the use of stylesheets

There are some interesting issues on the use of stylesheets that are emerging.

Great changes in the client-side interface

June 24 2000

More time spent preparing articles describing the current state of the project.

Stylesheets:

June 29 2000

Due to serious hardware problems with our server, the library is no longer available on-line and we have had to stop working on the on-line interface. So we'll focus on our own interface while waiting for the new server.

January 15 2001

A lot of changes in all parts of project HELM, mainly due to the availability of a new beta release of Coq. We are now working on performance, bug fixing and stability issues in order to get a first beta release, which we hope will be available together with the final version of Coq V7.0.