23 Apr 1998 | Initial version
25 May 1998 | Clarify definition of text node handling
12 Jun 1998 | Clean up for external release
19 Aug 1998 | Modifications for the DOM Level 1 Proposed Recommendation
22 Jan 1999 | Namespace support incorporated (Internet Draft version)
24 Feb 1999 | Changes to Comment and ProcessingInstruction handling
This document is intended to contribute to discussions of how digest (hash) values should be defined for general DOM structures. See the Document Object Model (DOM) Level 1 Specification Version 1.0 for the specification of W3C DOM 1.0.
The purpose of this document is to give a clear and unambiguous definition of digest (hash) values of XML objects. In particular, we propose to add a new API getDigest() to the interface Node that returns a digest value, a fixed length value (normally 128 bits or 160 bits) representing an entire subtree. Two subtrees are considered identical if their hash values are the same, and different if their hash values are different.
There are at least two usage scenarios of DOMHASH. One is as a basis for the Digital Signature for XML (XMLDSIG) proposal. Digital signature algorithms normally require hashing a signed content before signing. DOMHASH provides a concrete definition of the hash value calculation.
The other is to use DOMHASH when synchronizing two DOM structures. Suppose that a server program generates a DOM structure which is to be rendered by clients. If the server makes frequent small changes to a large DOM tree, it is desirable that only the modified parts be sent over to the client. A client can initiate a request by sending the root hash value of the structure in its cache memory. If it matches the root hash value of the current server structure, nothing needs to be sent. If not, the server compares the client hash with the older versions in the server's cache. If it finds one that matches the client's version of the structure, it locates the differences from the current version by recursively comparing the hash values of each node. This way, the client can receive only the updated portion of a large structure without requesting the whole thing. A similar idea of minimizing network communication for data replication was proposed in The HTTP Distribution and Replication Protocol.
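The pruning walk described above can be sketched as follows. Here HNode and its String hashes are hypothetical stand-ins for DOM nodes carrying precomputed DOMHASH values; the class and method names are illustrative, not part of the proposal.

```java
import java.util.ArrayList;
import java.util.List;

public class TreeSync {
    // Hypothetical stand-in for a DOM node with a precomputed subtree digest.
    static class HNode {
        final String hash;
        final List<HNode> children;
        HNode(String hash, HNode... cs) { this.hash = hash; this.children = List.of(cs); }
    }

    // Collects the server subtrees that differ from the client's cached copy,
    // descending only where the subtree digests disagree.
    static List<HNode> changed(HNode server, HNode client) {
        List<HNode> out = new ArrayList<>();
        collect(server, client, out);
        return out;
    }

    private static void collect(HNode s, HNode c, List<HNode> out) {
        if (c != null && s.hash.equals(c.hash)) return;       // identical subtree: prune
        if (c == null || s.children.size() != c.children.size()) {
            out.add(s);                                       // shape changed: resend whole subtree
            return;
        }
        int before = out.size();
        for (int i = 0; i < s.children.size(); i++)
            collect(s.children.get(i), c.children.get(i), out);
        if (out.size() == before) out.add(s);                 // difference is local to this node
    }
}
```

In the common case only a few leaf subtrees are collected, so the transfer is proportional to the size of the change, not the size of the document.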
One way of defining digest values is to take a surface string as the input to a digest algorithm. However, this approach has several drawbacks. The same internal DOM structure may be represented in many different ways as surface strings, even if they strictly conform to the XML specification. Treatment of white space, selection of character encodings, entity references (i.e., use of ampersands), and so on all have an impact on the generation of a surface string. If the implementations of surface string generation differ, the hash values will differ, resulting in unvalidatable digital signatures and unsuccessful detection of identical DOM structures. Therefore, it is desirable that the digest of a DOM be defined in DOM terms -- that is, as an unambiguous algorithm using the DOM API. This is the approach we take in this proposal.
The introduction of namespaces is another source of variation in surface strings, because different namespace prefixes can be used to represent the same namespace URI. In the following example, the namespace prefix "edi" is bound to the URI "http://ecommerce.org/schema", but this prefix can be arbitrarily chosen without changing the logical contents, as shown in the second example.
<?xml version="1.0"?>
<root xmlns:edi='http://ecommerce.org/schema'>
  <edi:order>
    :
  </edi:order>
</root>

<?xml version="1.0"?>
<root xmlns:ec='http://ecommerce.org/schema'>
  <ec:order>
    :
  </ec:order>
</root>
The DOMHash defined in this document is designed so that
the choice of the namespace prefix does not affect the
digest value. In the above example, both the
"root" elements will get the same digest value.
Hash values are defined on the DOM type Node. We consider the following four node types that are used for representing a DOM document structure:

Element
Attr
ProcessingInstruction
Text (including the subtype CDATASection)

Comment nodes and Document Type Definitions (DTDs) do not participate in the digest value calculation, because DOM does not require a conformant processor to create data structures for them. DOMHash is designed so that it can be computed with any XML processor conformant to the DOM or SAX specification.
Nodes with the node type EntityReference are assumed to be expanded before digest calculation.
The digest values are defined recursively on each level of the DOM tree, so that only the relevant part needs to be recalculated when a small portion of the tree is changed.
Below, we give the precise definitions of the digest for these types. For each type, we describe the format of the data to be supplied to a hash algorithm using a figure and a short description, followed by a Java code fragment that uses only the DOM API and the JDK 1.1 Platform Core API. Therefore, the semantics should be unambiguous.
As a general rule, all strings are encoded in UTF-16 in network byte order (big endian) with no byte order mark. If a sequence of Text nodes appears without any element nodes in between, those text nodes are merged into one by concatenation. A zero-length Text node is always ignored.
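As a sanity check, this encoding is what the JDK 1.1 name "UnicodeBigUnmarked" denotes; modern Java exposes it as StandardCharsets.UTF_16BE (big endian, no byte order mark). The helper below is illustrative, not part of the proposal.

```java
import java.nio.charset.StandardCharsets;

public class Utf16Check {
    // Encode a string the way DOMHASH expects: UTF-16, big endian, no BOM.
    public static byte[] encode(String s) {
        return s.getBytes(StandardCharsets.UTF_16BE);
    }
}
```

Note that "UTF-16" without a qualifier would prepend a two-byte order mark, which would change every digest.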
To avoid the dependence on the namespace prefix, we use "expanded names" for digest calculation. If an element name or an attribute name is qualified either by an explicit namespace prefix or by a default namespace, the name's LocalPart is prepended with the URI of the namespace (the namespace name as defined in the Namespaces in XML specification) and a colon before digest calculation. In the following example, the default qualified name "order" is expanded into "http://ecommerce.org/schema:order", while the explicit qualified name "book:title" is expanded into "urn:loc.gov:books:title" before digest calculation.
<?xml version="1.0"?>
<root xmlns='http://ecommerce.org/schema'
      xmlns:book='urn:loc.gov:books'>
  <order>
    <book:title> ... </book:title>
    :
  </order>
</root>
We define an expanded name (for either an element or an attribute) as follows:
In the following definitions, we assume that the getExpandedName() method (which returns the expanded name as defined above) is defined in both Element and Attr interfaces of DOM.
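A minimal sketch of such a getExpandedName(), written here as a free-standing helper over the standard org.w3c.dom API with a namespace-aware parser. The helper names and the parsing setup are our assumptions for illustration, not part of the proposal.

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.Node;

public class ExpandedName {
    // URI + ":" + local part for qualified names; the plain name otherwise.
    public static String expandedName(Node n) {
        String uri = n.getNamespaceURI();
        return uri == null ? n.getNodeName() : uri + ":" + n.getLocalName();
    }

    // Parses a small document with namespace processing enabled and
    // returns the first child of the document element.
    public static Element firstChild(String xml) throws Exception {
        DocumentBuilderFactory f = DocumentBuilderFactory.newInstance();
        f.setNamespaceAware(true);
        Document d = f.newDocumentBuilder()
                      .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
        return (Element) d.getDocumentElement().getFirstChild();
    }
}
```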
Note that digest values are not defined on namespace declarations. In other words, the digest value is not defined for an attribute when it is a namespace declaration, i.e., when its name is "xmlns" or its prefix is "xmlns".
In the example above, the two attributes that are namespace declarations do not have digest values and therefore do not participate in the calculation of the digest value of the "root" element.
The code fragments in the definitions below assume that they appear in implementation classes of Node; therefore, a method call without an explicit object reference is invoked on the Node itself. For example, getData() returns the text data of the current node if it is a Text node. The parameter digestAlgorithm is to be replaced by an identifier of the digest algorithm, such as "MD5" or "SHA".
The computation should begin with a four byte integer that represents
the type of the node, such as Node.TEXT_NODE
or Node.ELEMENT_NODE
.
Text Nodes

The hash value of a Text node is computed on the four-byte header followed by the UTF-16 encoded text string.
Node.TEXT_NODE (3) in 32 bit network-byte-ordered integer
Text data in UTF-16 stream (variable length)
public byte[] getDigest(String digestAlgorithm) {
    MessageDigest md = MessageDigest.getInstance(digestAlgorithm);
    md.update((byte)((Node.TEXT_NODE >> 24) & 0xff));
    md.update((byte)((Node.TEXT_NODE >> 16) & 0xff));
    md.update((byte)((Node.TEXT_NODE >> 8) & 0xff));
    md.update((byte)(Node.TEXT_NODE & 0xff));
    md.update(getData().getBytes("UnicodeBigUnmarked"));
    return md.digest();
}
Here, MessageDigest is in the package java.security, one of the built-in packages of JDK 1.1.
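The same byte layout can be reproduced without a DOM at all. The standalone helper below (illustrative, not part of the proposal) feeds the four-byte type header and the UTF-16 text to MessageDigest directly:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class TextDigest {
    // Digest of a Text node per DOMHASH: 4-byte node type, then UTF-16BE text.
    public static byte[] textDigest(String text, String algorithm) throws Exception {
        MessageDigest md = MessageDigest.getInstance(algorithm);
        int t = 3; // Node.TEXT_NODE
        md.update(new byte[]{ (byte)(t >>> 24), (byte)(t >>> 16), (byte)(t >>> 8), (byte)t });
        md.update(text.getBytes(StandardCharsets.UTF_16BE));
        return md.digest();
    }
}
```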
ProcessingInstruction Nodes

A ProcessingInstruction (PI) node has two components: the target and the data. The PI data runs from the first non-white-space character after the target to the character immediately preceding the closing ?>. White space after the target is therefore not significant: the digest value for <?foo   param?> is the same as the one for <?foo param?>.
Node.PROCESSING_INSTRUCTION_NODE (7) in 32 bit network-byte-ordered integer
PI target in UTF-16 stream (variable length)
0x00 0x00
PI data in UTF-16 stream (variable length)
public byte[] getDigest(String digestAlgorithm) {
    MessageDigest md = MessageDigest.getInstance(digestAlgorithm);
    md.update((byte)((Node.PROCESSING_INSTRUCTION_NODE >> 24) & 0xff));
    md.update((byte)((Node.PROCESSING_INSTRUCTION_NODE >> 16) & 0xff));
    md.update((byte)((Node.PROCESSING_INSTRUCTION_NODE >> 8) & 0xff));
    md.update((byte)(Node.PROCESSING_INSTRUCTION_NODE & 0xff));
    md.update(getName().getBytes("UnicodeBigUnmarked"));
    md.update((byte)0);
    md.update((byte)0);
    md.update(getData().getBytes("UnicodeBigUnmarked"));
    return md.digest();
}
Attr Nodes

The digest value of Attr nodes is defined similarly to that of PI nodes, except that we need a separator between the expanded attribute name and the attribute value. The '0x0000' value in UTF-16 is allowed nowhere in an XML document, so it can serve as an unambiguous separator. The expanded name must be used as the attribute name because the name may be qualified. Note that if the attribute is a namespace declaration (either the attribute name is "xmlns" or its prefix is "xmlns"), the digest value is undefined and the getDigest() method should return null.
Node.ATTRIBUTE_NODE (2) in 32 bit network-byte-ordered integer
Expanded attribute name in UTF-16 stream (variable length)
0x00 0x00
Attribute value in UTF-16 stream (variable length)
public byte[] getDigest(String digestAlgorithm) {
    if (getNodeName().equals("xmlns") || getNodeName().startsWith("xmlns:"))
        return null;
    MessageDigest md = MessageDigest.getInstance(digestAlgorithm);
    md.update((byte)((Node.ATTRIBUTE_NODE >> 24) & 0xff));
    md.update((byte)((Node.ATTRIBUTE_NODE >> 16) & 0xff));
    md.update((byte)((Node.ATTRIBUTE_NODE >> 8) & 0xff));
    md.update((byte)(Node.ATTRIBUTE_NODE & 0xff));
    md.update(getExpandedName().getBytes("UnicodeBigUnmarked"));
    md.update((byte)0);
    md.update((byte)0);
    md.update(getValue().getBytes("UnicodeBigUnmarked"));
    return md.digest();
}
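The following check (illustrative, not part of the proposal) shows why the separator is needed: without it, the name/value pairs ("ab", "c") and ("a", "bc") would hash over identical byte streams.

```java
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;

public class SeparatorCheck {
    // Byte stream for an attribute: name, 0x0000 separator (if requested), value.
    public static byte[] stream(String name, String value, boolean separator) throws Exception {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        out.write(name.getBytes(StandardCharsets.UTF_16BE));
        if (separator) { out.write(0); out.write(0); }
        out.write(value.getBytes(StandardCharsets.UTF_16BE));
        return out.toByteArray();
    }
}
```

Because 0x0000 can never occur inside UTF-16-encoded XML text, the name/value boundary is always unambiguous.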
Element Nodes

Element nodes are the most complex because they contain other nodes recursively. Hash values of these component nodes are used to calculate the node's digest, so that computation can be saved when the structure is partially changed.
First, all the attributes except for namespace declarations must be collected. This list is sorted by the expanded attribute names, in ascending order of the UTF-16 encoded expanded attribute names, using the string comparison operator defined as String#compareTo() in Java. The semantics of this sorting operation are unambiguous (no "ties" are possible because of the unique attribute name constraint).
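For instance, String#compareTo() orders by UTF-16 code units, so a shorter name precedes its extensions and ':' (0x003A) sorts before letters. A minimal sketch (the helper name is illustrative):

```java
import java.util.Arrays;

public class AttrSort {
    // Sort expanded attribute names exactly as the element digest requires.
    public static String[] sorted(String... names) {
        String[] a = names.clone();
        Arrays.sort(a); // natural String order is String#compareTo()
        return a;
    }
}
```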
The child node list does not include Comment nodes, because SAX applications cannot capture comment information.
Node.ELEMENT_NODE (1) in 32 bit network-byte-ordered integer
Expanded element name in UTF-16 stream (variable length)
0x00 0x00
The number of non-namespace-declaration attributes in 32 bit network-byte-ordered unsigned integer
Sequence of digest values of non-namespace-declaration attributes, sorted by String#compareTo() on the expanded attribute names
The number of child nodes except Comment nodes in 32 bit network-byte-ordered unsigned integer
Sequence of digest values of each child node except Comment nodes (variable length)

(Zero-length Text nodes and Comment nodes are not counted as children.)
public byte[] getDigest(String digestAlgorithm) {
    MessageDigest md = MessageDigest.getInstance(digestAlgorithm);
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    DataOutputStream dos = new DataOutputStream(baos);
    dos.writeInt(Node.ELEMENT_NODE); // This is stored in the network byte order
    dos.write(getExpandedName().getBytes("UnicodeBigUnmarked"));
    dos.write((byte)0);
    dos.write((byte)0);

    // Collect all attributes except for namespace declarations
    NamedNodeMap nnm = this.getAttributes();
    int len = nnm.getLength();
    // Find "xmlns" or "xmlns:foo" in nnm and omit it.
    ...
    dos.writeInt(len); // This is stored in the network byte order
    // Sort attributes by String#compareTo() on expanded attribute names.
    ...
    // Assume that `Attr[] aattr' holds the sorted Attr instances.
    for (int i = 0; i < len; i++)
        dos.write(aattr[i].getDigest(digestAlgorithm));

    Node n = this.getFirstChild();
    // Assume that adjoining Text nodes are merged, and that there are
    // no zero-length Text nodes and no Comment nodes.
    len = this.getChildNodes().getLength();
    dos.writeInt(len); // This is stored in the network byte order
    while (n != null) {
        dos.write(n.getDigest(digestAlgorithm));
        n = n.getNextSibling();
    }
    dos.close();
    md.update(baos.toByteArray());
    return md.digest();
}
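Putting the four definitions together, the following self-contained sketch computes DOMHASH over org.w3c.dom trees. The class name, the parse helper, and the reliance on Node#normalize() for text-node merging and empty-text removal are our assumptions for illustration, not part of the proposal.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.ArrayList;
import java.util.List;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Attr;
import org.w3c.dom.Document;
import org.w3c.dom.NamedNodeMap;
import org.w3c.dom.Node;

public class DomHash {
    public static byte[] digest(Node n, String alg) throws Exception {
        MessageDigest md = MessageDigest.getInstance(alg);
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        DataOutputStream dos = new DataOutputStream(baos); // writeInt is big endian
        switch (n.getNodeType()) {
            case Node.TEXT_NODE:
            case Node.CDATA_SECTION_NODE:          // CDATASection is a subtype of Text
                dos.writeInt(Node.TEXT_NODE);
                dos.write(utf16(n.getNodeValue()));
                break;
            case Node.PROCESSING_INSTRUCTION_NODE:
                dos.writeInt(Node.PROCESSING_INSTRUCTION_NODE);
                dos.write(utf16(n.getNodeName()));  // target
                dos.write(0); dos.write(0);         // 0x0000 separator
                dos.write(utf16(n.getNodeValue())); // data
                break;
            case Node.ATTRIBUTE_NODE:
                if (isNamespaceDecl((Attr) n)) return null; // undefined for xmlns
                dos.writeInt(Node.ATTRIBUTE_NODE);
                dos.write(utf16(expandedName(n)));
                dos.write(0); dos.write(0);
                dos.write(utf16(n.getNodeValue()));
                break;
            case Node.ELEMENT_NODE:
                dos.writeInt(Node.ELEMENT_NODE);
                dos.write(utf16(expandedName(n)));
                dos.write(0); dos.write(0);
                List<Attr> attrs = new ArrayList<>();
                NamedNodeMap nnm = n.getAttributes();
                for (int i = 0; i < nnm.getLength(); i++)
                    if (!isNamespaceDecl((Attr) nnm.item(i))) attrs.add((Attr) nnm.item(i));
                attrs.sort((a, b) -> expandedName(a).compareTo(expandedName(b)));
                dos.writeInt(attrs.size());
                for (Attr a : attrs) dos.write(digest(a, alg));
                List<Node> kids = new ArrayList<>();
                for (Node c = n.getFirstChild(); c != null; c = c.getNextSibling())
                    if (c.getNodeType() != Node.COMMENT_NODE) kids.add(c);
                dos.writeInt(kids.size());
                for (Node c : kids) dos.write(digest(c, alg));
                break;
            default:
                throw new IllegalArgumentException("no digest for node type " + n.getNodeType());
        }
        dos.close();
        md.update(baos.toByteArray());
        return md.digest();
    }

    static byte[] utf16(String s) { return s.getBytes(StandardCharsets.UTF_16BE); }

    static boolean isNamespaceDecl(Attr a) {
        return a.getNodeName().equals("xmlns") || a.getNodeName().startsWith("xmlns:");
    }

    static String expandedName(Node n) {
        String uri = n.getNamespaceURI();
        return uri == null ? n.getNodeName() : uri + ":" + n.getLocalName();
    }

    // Parse with namespace processing on; normalize() merges adjacent Text
    // nodes and removes empty ones, as the definitions above assume.
    public static Document parse(String xml) throws Exception {
        DocumentBuilderFactory f = DocumentBuilderFactory.newInstance();
        f.setNamespaceAware(true);
        Document d = f.newDocumentBuilder().parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
        d.normalize();
        return d;
    }
}
```

With this sketch, two documents that differ only in their namespace prefixes produce identical root digests, as the namespace section requires.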
We propose to add a new method to the Node interface as shown below. The getDigest() method takes one string as its parameter that specifies the digest algorithm. We assume that at least two algorithms, "MD5" and "SHA", must be implemented for any DOM processor to be compliant with DOMHASH.
typedef sequence<octet> bytearray;

interface Node {
    // NodeType
    const unsigned short ELEMENT_NODE                = 1;
    const unsigned short ATTRIBUTE_NODE              = 2;
    const unsigned short TEXT_NODE                   = 3;
    const unsigned short CDATA_SECTION_NODE          = 4;
    const unsigned short ENTITY_REFERENCE_NODE       = 5;
    const unsigned short ENTITY_NODE                 = 6;
    const unsigned short PROCESSING_INSTRUCTION_NODE = 7;
    const unsigned short COMMENT_NODE                = 8;
    const unsigned short DOCUMENT_NODE               = 9;
    const unsigned short DOCUMENT_TYPE_NODE          = 10;
    const unsigned short DOCUMENT_FRAGMENT_NODE      = 11;
    const unsigned short NOTATION_NODE               = 12;

    readonly attribute DOMString       nodeName;
             attribute DOMString       nodeValue;
                       // raises(DOMException) on setting
                       // raises(DOMException) on retrieval
    readonly attribute unsigned short  nodeType;
    readonly attribute Node            parentNode;
    readonly attribute NodeList        childNodes;
    readonly attribute Node            firstChild;
    readonly attribute Node            lastChild;
    readonly attribute Node            previousSibling;
    readonly attribute Node            nextSibling;
    readonly attribute NamedNodeMap    attributes;
    readonly attribute Document        ownerDocument;

    Node      insertBefore(in Node newChild, in Node refChild) raises(DOMException);
    Node      replaceChild(in Node newChild, in Node oldChild) raises(DOMException);
    Node      removeChild(in Node oldChild) raises(DOMException);
    Node      appendChild(in Node newChild) raises(DOMException);
    boolean   hasChildNodes();
    Node      cloneNode(in boolean deep);
    bytearray getDigest(in DOMString digestAlgorithm);
};
The definition described above can be implemented efficiently. XML Parser for Java includes a reference implementation with source code.
Please forward any comments and suggestions to