19 Aug 1998: Some modifications for DOM Level 1 Proposed Recommendation
12 Jun 1998: Cleanup for external release
23 Apr 1998: Initial version
This document is intended to contribute to the discussion of how digest (hash) values should be defined for general DOM structures. See the W3C Document Object Model Specification 1.0 draft for the latest draft specification of DOM 1.0.
The purpose of this document is to give a clear and unambiguous definition of digest (hash) values for XML objects. In particular, we propose adding a new API, getDigest(), to the Node interface. It returns a digest value, a fixed-length value (normally 128 bits or 160 bits) representing an entire subtree. Two subtrees are considered identical if their hash values are the same, and different if their hash values differ.
There are at least two usage scenarios for DOMHASH. One is as a basis for the Digital Signature for XML (XMLDSIG) proposal. Digital signature algorithms normally require hashing the signed content before signing, and DOMHASH provides a concrete definition of that hash value calculation.
The other is to use DOMHASH when synchronizing two DOM structures. Suppose that a server program generates a DOM structure which is to be rendered by clients. If the server makes frequent small changes to a large DOM tree, it is desirable that only the modified parts be sent to the client. A client can initiate a request by sending the root hash value of the structure in its cache. If it matches the root hash value of the current server structure, nothing need be sent. If not, the server compares the client's hash with the older versions in the server's cache. If it finds one that matches the client's version of the structure, it locates the differences from the current version by recursively comparing the hash values of each node. This way, the client can receive only the updated portion of a large structure without requesting the whole thing. A similar idea for minimizing network communication during data replication was proposed in The HTTP Distribution and Replication Protocol.
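The pruning idea behind this synchronization scenario can be sketched in a few lines. The sketch below uses a hypothetical SimpleNode class, not the DOM API, and its digest() hashes only a label and the child digests rather than the full DOMHASH format defined later; it is meant only to show how matching digests let the comparison skip entire subtrees.

```java
import java.security.MessageDigest;
import java.util.ArrayList;
import java.util.List;

public class HashDiff {
    // Hypothetical minimal tree node; NOT the DOM API. The digest here covers
    // only the label and the child digests, a simplified stand-in for DOMHASH.
    public static class SimpleNode {
        public String label;
        public List<SimpleNode> children = new ArrayList<SimpleNode>();
        public SimpleNode(String label) { this.label = label; }
        public byte[] digest() throws Exception {
            MessageDigest md = MessageDigest.getInstance("MD5");
            md.update(label.getBytes("UnicodeBigUnmarked"));
            for (SimpleNode c : children) md.update(c.digest());
            return md.digest();
        }
    }

    // Collect the subtrees of 'cur' that differ from the client's copy 'old'.
    // Equal digests prune an entire subtree from the comparison.
    public static void diff(SimpleNode old, SimpleNode cur,
                            List<SimpleNode> changed) throws Exception {
        if (MessageDigest.isEqual(old.digest(), cur.digest())) return; // identical
        if (!old.label.equals(cur.label)
                || old.children.size() != cur.children.size()) {
            changed.add(cur); // structure changed here: send this whole subtree
            return;
        }
        for (int i = 0; i < cur.children.size(); i++)
            diff(old.children.get(i), cur.children.get(i), changed);
    }
}
```

In a real implementation the server would cache the per-node digests, so unchanged subtrees cost one comparison each rather than a recomputation.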
One way of defining digest values is to take a surface string as the input to a digest algorithm. However, this approach has several drawbacks. The same internal DOM structure may be represented in many different ways as surface strings, even ones that strictly conform to the XML specification. Treatment of white space, choice of character encoding, entity references (i.e., use of ampersands), and so on all have an impact on the generated surface string. If two implementations generate surface strings differently, the hash values will differ, resulting in unverifiable digital signatures and failure to detect identical DOM structures.
Therefore, it is desirable that the digest of a DOM structure be defined in DOM terms -- that is, as an unambiguous algorithm using the DOM API. This is the approach we take in this proposal.
Hash values are defined on the DOM type Node. We consider the following five node types that are used for representing a DOM document structure: Text, Comment, ProcessingInstruction, Attribute, and Element.
Note: It is not simple to define hash values for the DTD. For now, we do not define hash values for Document, DocumentType, Entity, and related node types, to avoid this complication. A future release will cover this point.
Below, we give the precise definition of the digest for each of these types. We describe the format of the data supplied to the hash algorithm using a figure and a short description, followed by a Java code fragment that uses only the DOM API and the JDK 1.1 Platform Core API, so the semantics should be unambiguous.
As a general rule, all strings are encoded in UTF-16 in network byte order (big endian) with no byte order mark. If there is a sequence of text nodes without any element nodes in between, these text nodes are merged into one by concatenating them. A zero-length text node is always ignored.
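In the JDK, the charset name "UnicodeBigUnmarked" (used in all the code fragments below) produces exactly this encoding: UTF-16, big endian, with no byte order mark. A minimal check:

```java
public class Utf16Check {
    // Encode a string as UTF-16, big endian, no byte order mark,
    // as required by the rule above.
    public static byte[] encode(String s) throws Exception {
        return s.getBytes("UnicodeBigUnmarked");
    }
}
```

For example, encoding "A" yields the two bytes 0x00 0x41; the plain "UTF-16" charset name would instead prepend a two-byte order mark, which this rule forbids.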
The code fragments in the definitions below assume that they appear in implementation classes of Node. Therefore, a method call without an explicit object reference applies to the Node itself. For example, getData() returns the text data of the current node if it is a Text node. The parameter digestAlgorithm is to be replaced by an identifier of the digest algorithm, such as "MD5" or "SHA".
The computation should begin with a four byte integer that represents the type of the node, such as Text or Element.
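The four-byte header can be produced with simple shifts, exactly as the fragments below do with repeated md.update() calls. This helper (a convenience for illustration, not part of the proposal) returns the header as an array:

```java
public class NodeTypeHeader {
    // Serialize a node-type constant as a 4-byte big-endian (network
    // byte order) header, matching the layout figures below.
    public static byte[] header(int nodeType) {
        return new byte[] {
            (byte)((nodeType >> 24) & 0xff),
            (byte)((nodeType >> 16) & 0xff),
            (byte)((nodeType >> 8) & 0xff),
            (byte)(nodeType & 0xff)
        };
    }
}
```

For a Text node (type 3) this yields the bytes 0x00 0x00 0x00 0x03.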
The hash value of a Text node is computed over the four-byte header followed by the UTF-16 encoded text string.
Node.TEXT_NODE (3) in 32-bit network-byte-ordered integer
Text data in UTF-16 stream (variable length)
public byte[] getDigest(String digestAlgorithm) {
    // Checked exceptions (NoSuchAlgorithmException,
    // UnsupportedEncodingException) are omitted for brevity.
    MessageDigest md = MessageDigest.getInstance(digestAlgorithm);
    md.update((byte)((Node.TEXT_NODE >> 24) & 0xff));
    md.update((byte)((Node.TEXT_NODE >> 16) & 0xff));
    md.update((byte)((Node.TEXT_NODE >> 8) & 0xff));
    md.update((byte)(Node.TEXT_NODE & 0xff));
    md.update(getData().getBytes("UnicodeBigUnmarked"));
    return md.digest();
}
Here, MessageDigest comes from the java.security package, one of the built-in packages of JDK 1.1.
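The same computation can be reproduced without a DOM implementation, which is convenient for testing. The method below (a self-contained restatement for illustration, with the text passed in as a string rather than obtained via getData()) computes the Text-node digest as defined above:

```java
import java.security.MessageDigest;

public class TextDigestDemo {
    // Digest of a Text node per the definition above: a 4-byte big-endian
    // node-type header (Node.TEXT_NODE == 3) followed by the UTF-16 text.
    public static byte[] textDigest(String text, String algorithm)
            throws Exception {
        MessageDigest md = MessageDigest.getInstance(algorithm);
        int type = 3; // Node.TEXT_NODE
        md.update((byte)((type >> 24) & 0xff));
        md.update((byte)((type >> 16) & 0xff));
        md.update((byte)((type >> 8) & 0xff));
        md.update((byte)(type & 0xff));
        md.update(text.getBytes("UnicodeBigUnmarked"));
        return md.digest();
    }
}
```

With "MD5" the result is 16 bytes (128 bits); with "SHA" it is 20 bytes (160 bits). The Comment and ProcessingInstruction digests below follow the same pattern with different headers and inputs.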
Comment nodes are similar to Text nodes except for the header.
Node.COMMENT_NODE (8) in 32-bit network-byte-ordered integer
Comment data in UTF-16 stream (variable length)
public byte[] getDigest(String digestAlgorithm) {
    MessageDigest md = MessageDigest.getInstance(digestAlgorithm);
    md.update((byte)((Node.COMMENT_NODE >> 24) & 0xff));
    md.update((byte)((Node.COMMENT_NODE >> 16) & 0xff));
    md.update((byte)((Node.COMMENT_NODE >> 8) & 0xff));
    md.update((byte)(Node.COMMENT_NODE & 0xff));
    md.update(getData().getBytes("UnicodeBigUnmarked"));
    return md.digest();
}
A ProcessingInstruction node has two components: the target and the data. Accordingly, the hash is computed over the concatenation of both. Note that the data contains the leading white space, so there is no ambiguity even though no separator is inserted between the target and the data.
Node.PROCESSING_INSTRUCTION_NODE (7) in 32-bit network-byte-ordered integer
PI target and data in UTF-16 stream (variable length)
public byte[] getDigest(String digestAlgorithm) {
    MessageDigest md = MessageDigest.getInstance(digestAlgorithm);
    md.update((byte)((Node.PROCESSING_INSTRUCTION_NODE >> 24) & 0xff));
    md.update((byte)((Node.PROCESSING_INSTRUCTION_NODE >> 16) & 0xff));
    md.update((byte)((Node.PROCESSING_INSTRUCTION_NODE >> 8) & 0xff));
    md.update((byte)(Node.PROCESSING_INSTRUCTION_NODE & 0xff));
    md.update(getName().getBytes("UnicodeBigUnmarked"));
    md.update(getData().getBytes("UnicodeBigUnmarked"));
    return md.digest();
}
Attribute nodes are similar to ProcessingInstruction nodes, except that we need a separator between the attribute name and the attribute value. Note that the value 0x0000 in UTF-16 is allowed nowhere in an XML document, so it can serve as an unambiguous separator.
Node.ATTRIBUTE_NODE (2) in 32-bit network-byte-ordered integer
Attribute name in UTF-16 stream (variable length)
0x00 0x00 (two-byte separator)
Attribute value in UTF-16 stream (variable length)
public byte[] getDigest(String digestAlgorithm) {
    MessageDigest md = MessageDigest.getInstance(digestAlgorithm);
    md.update((byte)((Node.ATTRIBUTE_NODE >> 24) & 0xff));
    md.update((byte)((Node.ATTRIBUTE_NODE >> 16) & 0xff));
    md.update((byte)((Node.ATTRIBUTE_NODE >> 8) & 0xff));
    md.update((byte)(Node.ATTRIBUTE_NODE & 0xff));
    md.update(getName().getBytes("UnicodeBigUnmarked"));
    md.update((byte)0);
    md.update((byte)0);
    // getValue() returns a String instance in WD-DOM-19980416
    md.update(getValue().getBytes("UnicodeBigUnmarked"));
    return md.digest();
}
Element nodes are the most complex because they consist of other nodes recursively. The hash values of these component nodes are used to calculate the element's digest, so that computation can be saved when the structure is only partially changed.
One delicate point in this definition is that the attributes must be sorted by attribute name. This is done using the string comparison defined by String#compareTo() in Java, that is, in ascending order of the UTF-16 code units of the attribute names. The semantics of this sorting operation should therefore be clear.
Node.ELEMENT_NODE (1) in 32-bit network-byte-ordered integer
Element name in UTF-16 stream (variable length)
0x00 0x00 (two-byte separator)
The number of attributes in 32-bit network-byte-ordered unsigned integer
Sequence of digest values of the attributes, sorted by String#compareTo() on the attribute names
The number of child nodes in 32-bit network-byte-ordered unsigned integer
Sequence of digest values of each child node (variable length)
(A sequence of adjacent child texts is merged into one text. A zero-length text is not counted as a child.)
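The text-merging rule in the note above can be sketched independently of the DOM API. In this illustration, a child list is modeled as strings prefixed "T:" for text nodes and "E:" for element nodes (a hypothetical encoding, used here only to keep the sketch self-contained):

```java
import java.util.ArrayList;
import java.util.List;

public class TextMerge {
    // Merge runs of adjacent text children into one, and drop
    // zero-length texts, per the rule above. "T:" marks a text child,
    // anything else is a non-text child.
    public static List<String> normalize(List<String> children) {
        List<String> out = new ArrayList<String>();
        StringBuilder run = new StringBuilder();
        for (String c : children) {
            if (c.startsWith("T:")) {
                run.append(c.substring(2)); // accumulate adjacent text
            } else {
                if (run.length() > 0) out.add("T:" + run); // flush merged text
                run.setLength(0);
                out.add(c);
            }
        }
        if (run.length() > 0) out.add("T:" + run); // trailing text run
        return out;
    }
}
```

For example, the children [T:"a", T:"b", E:p, T:"", E:q, T:"c"] normalize to [T:"ab", E:p, E:q, T:"c"]: the first two texts merge, and the empty text between the elements disappears entirely.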
public byte[] getDigest(String digestAlgorithm) {
    MessageDigest md = MessageDigest.getInstance(digestAlgorithm);
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    DataOutputStream dos = new DataOutputStream(baos);
    dos.writeInt(Node.ELEMENT_NODE);   // stored in network byte order
    dos.write(getName().getBytes("UnicodeBigUnmarked"));
    dos.write((byte)0);
    dos.write((byte)0);
    NamedNodeMap ni = getAttributes();
    int len = ni.getLength();
    dos.writeInt(len);                 // stored in network byte order
    // Sort attributes by String#compareTo().
    ...
    // Assume that `Attribute[] aattr' now holds the sorted Attribute instances.
    for (int i = 0; i < len; i++)
        dos.write(aattr[i].getDigest(digestAlgorithm));
    NodeList nl = getChildNodes();
    // Assume that adjoining Texts are merged and no zero-length Text remains.
    len = nl.getLength();
    dos.writeInt(len);                 // stored in network byte order
    for (int i = 0; i < len; i++)
        dos.write(nl.item(i).getDigest(digestAlgorithm));
    dos.close();
    md.update(baos.toByteArray());
    return md.digest();
}
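The attribute sort elided in the fragment above is simply a lexicographic sort of the attribute names by String#compareTo(), which Arrays.sort() performs for strings by default. A minimal sketch over bare names (the real code would sort the Attribute instances by name):

```java
import java.util.Arrays;

public class AttrSort {
    // Sort attribute names in ascending order of their UTF-16 code units,
    // the order defined by String#compareTo().
    public static String[] sortNames(String[] names) {
        String[] sorted = names.clone();
        Arrays.sort(sorted); // natural String order == compareTo() order
        return sorted;
    }
}
```

Note that this is code-unit order, not alphabetical order in any locale: for example, "Z" (0x005A) sorts before "class" (whose first code unit is 0x0063), which in turn sorts before "id".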
We propose adding a new method to the Node interface as shown below. The getDigest() method takes a single string parameter that specifies the digest algorithm. We require that at least two algorithms, "MD5" and "SHA", be implemented for a DOM processor to be compliant with DOMHASH.
public interface Node {
    // NodeType
    public static final short ELEMENT_NODE                = 1;
    public static final short ATTRIBUTE_NODE              = 2;
    public static final short TEXT_NODE                   = 3;
    public static final short CDATA_SECTION_NODE          = 4;
    public static final short ENTITY_REFERENCE_NODE       = 5;
    public static final short ENTITY_NODE                 = 6;
    public static final short PROCESSING_INSTRUCTION_NODE = 7;
    public static final short COMMENT_NODE                = 8;
    public static final short DOCUMENT_NODE               = 9;
    public static final short DOCUMENT_TYPE_NODE          = 10;
    public static final short DOCUMENT_FRAGMENT_NODE      = 11;
    public static final short NOTATION_NODE               = 12;

    public String getNodeName();
    public String getNodeValue();
    public void setNodeValue(String arg);
    public short getNodeType();
    public Node getParentNode();
    public NodeList getChildNodes();
    public Node getFirstChild();
    public Node getLastChild();
    public Node getPreviousSibling();
    public Node getNextSibling();
    public NamedNodeMap getAttributes();
    public Document getOwnerDocument();
    public Node insertBefore(Node newChild, Node refChild) throws DOMException;
    public Node replaceChild(Node newChild, Node oldChild) throws DOMException;
    public Node removeChild(Node oldChild) throws DOMException;
    public Node appendChild(Node newChild) throws DOMException;
    public boolean hasChildNodes();
    public Node cloneNode(boolean deep);
    public byte[] getDigest(String digestAlgorithm);
}
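Both required algorithms ship with the JDK, so an implementation can obtain them directly through java.security.MessageDigest; "SHA" resolves to SHA-1. A quick check of their digest sizes:

```java
import java.security.MessageDigest;

public class AlgCheck {
    // Return the digest length in bytes for a JDK-provided algorithm name.
    public static int digestLength(String alg) throws Exception {
        return MessageDigest.getInstance(alg).getDigestLength();
    }
}
```

"MD5" yields 16-byte (128-bit) digests and "SHA" yields 20-byte (160-bit) digests, matching the fixed lengths mentioned in the introduction.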
The definition described above can be implemented efficiently. XML for Java includes a reference implementation with source code.
Please forward any comments and suggestions to: