
On-stack replacement


On-stack replacement (OSR) is any mechanism that enables a virtual machine to transfer execution between different versions of the same method body, even while that method is running.[1] The "stack" in the name refers to the call stack: intuitively, on-stack replacement allows a method on the call stack to be replaced by a different implementation at run time.

On-stack replacement extends the applicability of the following transformations and features:[1]

  • deferred compilation to improve compiler speed and/or code quality
  • debugging of optimized code via de-optimization
  • online optimization of activations containing long-running loops
  • optimization and code generation based on speculative program invariants

On-stack replacement and its role in enabling other optimizations

Deferred compilation

The idea behind deferred compilation is that some paths through a method are executed only under extremely rare circumstances, so it is not worth spending time compiling them. The compiler generates code for such a rarely taken path only once the path is actually executed.[2]

Without on-stack replacement, the virtual machine would have to either:

  • not apply deferred compilation
  • save the program state when entering methods that have been only partially compiled.
    • if a path that has not been compiled is taken, the program state must be restored to the latest valid state. The virtual machine must then continue either by interpretation or by compiling the method again, this time including the rarely taken paths.
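This behavior can be sketched in miniature. The following Python simulation is illustrative only (the class and its method names are invented, not any real VM's API): the "JIT" records which paths it has generated code for, and the rare path is compiled only on first use.

```python
# Hypothetical sketch of deferred compilation: the "JIT" compiles a
# branch only the first time that branch is actually taken.

class DeferredJIT:
    def __init__(self):
        self.compiled_paths = set()   # which paths have generated code

    def run(self, x):
        if x >= 0:
            self._ensure_compiled("common")   # hot path: compiled on first call
            return x * 2
        else:
            self._ensure_compiled("rare")     # rare path: compiled only if reached
            return -x

    def _ensure_compiled(self, path):
        if path not in self.compiled_paths:
            self.compiled_paths.add(path)     # stands in for generating machine code

jit = DeferredJIT()
for i in range(1000):
    jit.run(i)
print(sorted(jit.compiled_paths))   # ['common'] — the rare path cost nothing
jit.run(-5)
print(sorted(jit.compiled_paths))   # ['common', 'rare'] — compiled on first use
```

In a real VM, reaching the uncompiled branch would trap back into the runtime, which is exactly where OSR is needed to resume the already-running activation.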

Debugging of optimized code

Optimized code is usually hard for a human to follow, so programs are typically compiled without optimizations for debugging. However, an unoptimized program may take too long to reach a breakpoint. On-stack replacement lets the user reach the breakpoint at the speed of the optimized program and then compile an unoptimized version of the buggy method in order to debug it.[3]
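As a minimal sketch (all names invented, not any real debugger's interface), the mechanism can be simulated by keeping two versions of a method and switching the active one when a breakpoint is requested:

```python
# Hypothetical sketch of de-optimization for debugging: run the optimized
# body at full speed until a breakpoint is requested, then switch the
# active version to an unoptimized, inspectable one.

breakpoint_requested = False
active_version = "optimized"

def debuggable_body(x):
    result = x + x          # same semantics, but locals are inspectable
    return result

def method(x):
    global active_version
    if breakpoint_requested and active_version == "optimized":
        active_version = "unoptimized"   # "OSR" into the debuggable version
    if active_version == "optimized":
        return x * 2                     # optimized body
    return debuggable_body(x)

print(method(21), active_version)   # 42 optimized
breakpoint_requested = True
print(method(21), active_version)   # 42 unoptimized — same result, debuggable
```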

Adaptive optimization activation

Adaptive optimizations, in the context of compilers, are optimizations performed while the program is running. Usually this is implemented by first creating an optimized version of a method and then replacing every reference to the unoptimized method with a reference to the optimized one.[4] However, for activations of the method that are already on the call stack, their work must be either committed or discarded before the unoptimized method can be swapped for the optimized one. The two options are:

  • finish the work being done by an unoptimized version of the method, which may take too long
  • discard the work done by the unoptimized version of the method and switch to the optimized version

On-stack replacement avoids this trade-off: it transfers running activations directly, applying online optimizations as soon as they become available.
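The classic case is a long-running loop. The sketch below is a hand-written simulation, not a real VM: an "interpreted" loop counts its iterations and, once past a threshold, transfers its live state (`i` and `total`) into a "compiled" continuation mid-execution. All names and the threshold value are invented for illustration.

```python
# Hypothetical sketch of OSR for a long-running loop: the interpreter
# counts back-edges and, past a threshold, hands the live loop state
# to a "compiled" continuation of the same loop.

THRESHOLD = 100

def compiled_sum_from(i, n, total):
    # "compiled" continuation, entered mid-loop with the captured state
    return total + sum(range(i, n))

def interpreted_sum(n):
    total = 0
    for i in range(n):
        total += i                 # slow, "interpreted" iteration
        if i + 1 >= THRESHOLD:     # back-edge counter hit the threshold:
            # OSR: transfer i and total into the compiled version
            return compiled_sum_from(i + 1, n, total)
    return total

print(interpreted_sum(10))      # 45 — short run finishes interpreted
print(interpreted_sum(10**6))   # 499999500000 — OSR'd into the compiled loop
```

The result is identical either way; what OSR changes is only which version of the code finishes the activation that is already on the stack.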

Optimization and code generation based on speculative program invariants

Deferred compilation is a special case of optimization and code generation based on speculative program invariants: the speculated invariant is that execution will never reach certain program paths. Many other speculative optimizations likewise take advantage of OSR to recompile code without an assumption once that assumption is proven false.
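One common speculated invariant is that a call site only ever sees receivers of a single class, which lets the compiler inline the callee without dynamic dispatch. The Python sketch below (all names invented) guards the speculated receiver class and, when the guard fails, "recompiles" by falling back to the general dispatching path:

```python
# Hypothetical sketch of a speculative optimization: the "compiled" code
# assumes every receiver is class A. When the guard fails, the VM
# abandons the speculation and recompiles without the assumption.

class A:
    def f(self):
        return 1

class B(A):
    def f(self):
        return 2

speculative = True      # is the speculated invariant still believed?
recompilations = 0

def call_f(obj):
    global speculative, recompilations
    if speculative:
        if type(obj) is A:       # guard for the speculated invariant
            return 1             # inlined body of A.f, no dynamic dispatch
        speculative = False      # invariant falsified: deoptimize and
        recompilations += 1      # recompile without the assumption
    return obj.f()               # general path with dynamic dispatch

print([call_f(A()) for _ in range(3)])   # [1, 1, 1] — guard holds
print(call_f(B()))                       # 2 — guard fails, speculation dropped
print(recompilations)                    # 1
```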

Techniques to implement OSR

Multiple entry points implementation

"To generate target code for an OSR transition transition, previous systems such as HotSpot would compile one version of the target method with two or more entry points, and use this version both for OSR transition and future method invocations"[1]

New method implementation

"Construct a new method, in bytecode, that sets up the new stack frame and continues execution,preserving semantics."[1] To do this a specialized prologue is prepended to the original method. The specialized prologue saves values into locals, loads values from the stack, and then jumps to the current program counter. [1]

References

  1. Fink, S. J.; Qian, Feng (2003). "Design, implementation and evaluation of adaptive recompilation with on-stack replacement". International Symposium on Code Generation and Optimization (CGO 2003). IEEE Computer Society: 241. doi:10.1109/cgo.2003.1191549. ISBN 076951913X.
  2. Chambers, Craig; Ungar, David (1991-11-01). "Making pure object-oriented languages practical". ACM SIGPLAN Notices. 26 (11): 1–15. doi:10.1145/117954.117955. ISSN 0362-1340.
  3. Hölzle, Urs; Ungar, David (1994-10-01). "A third-generation SELF implementation: reconciling responsiveness with performance". ACM SIGPLAN Notices. 29: 229–243. doi:10.1145/191081.191116. ISBN 0897916883.
  4. Steiner, Edwin; Krall, Andreas; Thalinger, Christian (2007-09-05). "Adaptive inlining and on-stack replacement in the CACAO virtual machine". ACM: 221–226. doi:10.1145/1294325.1294356. ISBN 9781595936721.


This article "On-stack replacement" is from Wikipedia. The list of its authors can be seen in its historical and/or the page Edithistory:On-stack replacement. Articles copied from Draft Namespace on Wikipedia could be seen on the Draft Namespace of Wikipedia and not main one.