Subject: C424 JGSI Review
Resent-Date: Thu, 30 Sep 1999 23:18:57 -0400
Resent-From: Geoffrey Fox
Resent-To: p_gcf@npac.syr.edu
Date: Mon, 20 Sep 1999 00:39:28 -0600
From: "David S. Dixon"
Organization: Least Squares Software, LLC
To: Geoffrey Fox

1. Overall
********

An interesting, illuminating, and important work. The approach to measuring the performance of interpreted versus compiled execution is unique and thoughtful. The examples of circumstances in which compiling does or does not improve execution are very well chosen, and there are good additional comments on using performance data to isolate performance problems independent of compilation (such as frequent object creation, heavy I/O, and many small methods).

The paper is short on the numbers, statistics, and methodology needed to support some of its specific timing claims (for example, Table 2, Table 3, and the object-creation time of 1.57 seconds in paragraph 3, page 14).

2. Comments for Author(s)
*********************

The conversational style is fine, but the frequent use of possessives (e.g., "the class' constantpool" rather than "the constantpool of the class") makes for occasionally awkward sentence structure.

Two important points were big question marks until page 10 and should have been made much earlier:
1) The instrumentation points are: method entry, method exit, and method calls.
2) One very good reason for simulating JIT compilation is the possibility that the compiler will reorder or optimize away the instrumentation code.

Two additional issues arise from replacing the compilation step with a set wait:
1) What statistics were used in determining a valid wait time?
2) What are the timing differences between a stand-alone compilation and a JIT compilation in terms of resource contention, memory utilization, and pre-compilation by the interpreter? It should be simple to set up a test that would compare the simulated compiler with a real JIT, and it would likely produce interesting results.
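For instance, the skeleton of such a test might look like the following sketch. The workload, the 100 ms simulated compile delay, and all class and method names here are purely illustrative assumptions, not the authors' code:

```java
// Illustrative sketch: time one run under a simulated compiler (fixed wait
// before execution, as in the paper's approach) versus an immediate run.
// The same harness could launch a real JIT-enabled VM for comparison.
public class SimulatedJitComparison {

    // A stand-in compute kernel; any repeatable workload would do.
    static long workload(int n) {
        long sum = 0;
        for (int i = 0; i < n; i++) {
            sum += (long) i * i % 7;
        }
        return sum;
    }

    // Returns elapsed wall-clock time in milliseconds for one run,
    // optionally preceded by a fixed "compile" delay.
    static long timeRun(long simulatedCompileMillis, int n) {
        long start = System.nanoTime();
        if (simulatedCompileMillis > 0) {
            try {
                Thread.sleep(simulatedCompileMillis); // stand-in for JIT compile cost
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
        workload(n);
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) {
        int n = 10_000_000;
        long plain = timeRun(0, n);        // no simulated compilation
        long simulated = timeRun(100, n);  // assumed 100 ms simulated compile step
        System.out.println("plain=" + plain + "ms, simulated=" + simulated + "ms, "
                + "overhead=" + (simulated - plain) + "ms");
    }
}
```

Repeating such runs and comparing the overhead distribution against measurements from an actual JIT would speak directly to both questions above: whether the chosen wait time is statistically defensible, and how it differs from real JIT behavior under contention.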
* Missing article - Last sentence of the Abstract should be "Results of our work are a guide..."
* Spelling error and number disagreement - Last sentence of bullet #3 on page 2 should be "...help a programmer better understand an application's execution."
* Missing comma - Caption on Figure 1 should be "...compiled executions, methods may be..."
* Run-on sentence - Last sentence on page 2, beginning "Results from our study demonstrate..."
* Time-line and acting-character confusion - Top of page 8: it is unclear which actor changes which instructions, and when.
* Repeated phrase - Top of page 9, the double indirection "...SPARC instrumentation code to generate instrumentation code for..." is confusing even if intentional.
* Extra word - First sentence, para. 3, page 9, should be "...get the VM to load..."
* Incorrect article - Third sentence, para. 3, page 9, should be "An alternative is to find the..."
* Confusing tense and word order - Fifth sentence onward, paras. 3 and 4, page 9. It is unclear why one alternative is preferred, which alternative was actually selected and why, and what function the "type tags associated with AP and VM resources" serve given the choice of the "main method" alternative over the "VM" alternative. It is also unclear whether the VM was instrumented or not.
* Incorrect term - Last two sentences, para. 1, page 11: both instances of "execution time" should be replaced with "CPU time". In fact, according to the graph, the execution time actually increased with the compiled version. The graph implies that the interpreted code completed 1.05 (1.18 peak) iterations per second, while the compiled code completed 0.523 (0.823 peak) iterations per second. Why is this? JNI overhead? Worth some discussion.
* Missing units - Table 2, page 13: what are the units?
* Missing units - Table 3, page 14: what are the units?
* Missing source - Table 3, page 14, and top of page 15: the MethodCallTime of 2.5 microseconds doesn't seem to come from anywhere.
* Missing source - First sentence, para. 3, page 14: the number 1.57 doesn't seem to come from anywhere.

3. Comments for Editor(s)
********************

None.