plutus-core-1.36.0.0: Language library for Plutus Core
Safe Haskell: Safe-Inferred
Language: Haskell2010

UntypedPlutusCore.Transform.FloatDelay

Description

The Float Delay optimization floats Delay from arguments into function bodies, where possible. It turns (\n -> ...Force n...Force n...) (Delay arg) into (\n -> ...Force (Delay n)...Force (Delay n)...) arg.

The above transformation is performed if:

  • All occurrences of arg are under Force.
  • arg is essentially work-free.
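The rewrite and its two side conditions can be sketched on a toy untyped term type. This is illustrative only, not the real plutus-core API: Term, allUnderForce, wrapOccurrences and floatDelaySimple are made-up names, and "essentially work-free" is crudely approximated as "a bare variable".

```haskell
-- Toy untyped term type, standing in for UPLC terms (illustrative only).
data Term
  = Var String
  | Lam String Term
  | App Term Term
  | Force Term
  | Delay Term
  deriving (Eq, Show)

-- Side condition 1: every occurrence of the variable n is directly under Force.
allUnderForce :: String -> Term -> Bool
allUnderForce n = go
  where
    go (Force (Var m)) | m == n = True
    go (Var m)   = m /= n
    go (Lam m b) = m == n || go b   -- shadowing: an inner n is a different variable
    go (App f x) = go f && go x
    go (Force t) = go t
    go (Delay t) = go t

-- Rewrap each (Force n) as (Force (Delay n)), so a later
-- Force-Delay cancellation can remove both constructors.
wrapOccurrences :: String -> Term -> Term
wrapOccurrences n = go
  where
    go (Force (Var m)) | m == n = Force (Delay (Var m))
    go (Var m)   = Var m
    go (Lam m b) = Lam m (if m == n then b else go b)
    go (App f x) = App (go f) (go x)
    go (Force t) = Force (go t)
    go (Delay t) = Delay (go t)

-- The top-level rewrite, fired only when both side conditions hold:
-- (\n -> ...Force n...) (Delay arg)  ~>  (\n -> ...Force (Delay n)...) arg
floatDelaySimple :: Term -> Term
floatDelaySimple (App (Lam n body) (Delay arg))
  | allUnderForce n body, workFree arg =
      App (Lam n (wrapOccurrences n body)) arg
  where
    -- crude stand-in for "essentially work-free": a bare variable
    workFree (Var _) = True
    workFree _       = False
floatDelaySimple t = t
```

If either condition fails (an occurrence not under Force, or a non-trivial argument), the term is left unchanged, which is how the size/cost guarantee below is maintained.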

This achieves a similar effect to Plutonomy's "Split Delay" optimization. The difference is that Split Delay simply splits the Delay argument into multiple arguments, turning the above example into (\m -> (\n -> ...Force n...Force n...) (Delay m)) arg, and then relies on other optimizations to simplify it further. Specifically, once the inliner inlines Delay m, the result is identical to that of Float Delay.

The advantages of Float Delay are:

  • It doesn't rely on the inliner. In this example, Split Delay relies on the inliner to inline Delay m, but there's no guarantee that the inliner will do so, because inlining it may increase the program size.

We could potentially modify the inliner so that it is aware of Float Delay and Force-Delay Cancel, and makes inlining decisions with these other optimizations in mind. The problem is that not only does this make the inlining heuristics much more complex, but it could easily lead to code duplication. Other optimizations often need to do some calculation in order to make certain optimization decisions (e.g., in this case, we want to check whether all occurrences of arg are under Force), and if we relied on the inliner to inline the Delay, the same check would need to be performed by the inliner.

  • Because Float Delay requires that all occurrences of arg are under Force, it is guaranteed not to increase the size or the cost of the program. This is not the case with Split Delay: in this example, if the occurrences of n are not under Force, then Split Delay may increase the size of the program, regardless of whether or not Delay m is inlined. If Delay m is not inlined, then it will also increase the cost of the program, due to the additional application.

An alternative approach that always floats the Delay, regardless of whether all occurrences of arg are under Force, was implemented and tested, and it is strictly worse than Float Delay on our current test suite (specifically, the alternative approach causes one test case to have a slightly bigger program, and everything else is equal).

Why is this optimization performed on UPLC, not PIR?

  1. Not only are the types and let-bindings in PIR not useful for this optimization, they can also get in the way. For example, we cannot transform let f = /\a. ...a... in ...{f t1}...{f t2}... into let f = ...a... in ...f...f....
  2. This optimization mainly interacts with ForceDelayCancel and the inliner, and both are part of the UPLC simplifier.
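To illustrate the interaction with ForceDelayCancel, here is a toy sketch (not the real plutus-core API; the Term type and cancelForceDelay are illustrative names) of the cancellation step: every Force (Delay t) that Float Delay introduces in the body is later rewritten to t, eliminating both constructors from the program.

```haskell
-- Illustrative only: a toy term type and a top-down pass that
-- rewrites Force (Delay t) to t, mirroring what ForceDelayCancel does.
data Term
  = Var String
  | Lam String Term
  | App Term Term
  | Force Term
  | Delay Term
  deriving (Eq, Show)

cancelForceDelay :: Term -> Term
cancelForceDelay (Force (Delay t)) = cancelForceDelay t
cancelForceDelay (Force t)         = Force (cancelForceDelay t)
cancelForceDelay (Delay t)         = Delay (cancelForceDelay t)
cancelForceDelay (App f x)         = App (cancelForceDelay f) (cancelForceDelay x)
cancelForceDelay (Lam n b)         = Lam n (cancelForceDelay b)
cancelForceDelay t                 = t
```

After Float Delay turns Force n into Force (Delay n), this pass reduces each such occurrence back to n, which is the payoff of running both passes in the same (UPLC) simplifier.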

Documentation

floatDelay ∷ (MonadQuote m, Rename (Term name uni fun a), HasUnique name TermUnique) ⇒ Term name uni fun a → m (Term name uni fun a)