Summary

  • Reason for writing: The resources (CPU cycles, memory, storage, network access) available to a Rholang contract must be limited to those purchased by the message sender who triggers the contract. Since Rholang is Turing complete, a resource bound cannot be statically determined. Thus, we require a means to bound resource consumption at runtime.

  • Problem: In the following, the necessary definitions are given and implementation options explored. Essentially, I make the argument that a fair bound for Rholang requires a well-defined notion of state, of operations that transform the state, and a means to quantify resource consumption in a concurrent setting.
  • Conclusion: A resource bound enforced at the Rosette VM level is trivial and granular w.r.t. Rosette. A bound enforced at the Rholang source level is infeasible and imprecise. A resource bound for the Rholang interpreter is feasible and granular.

  • Implications:

    1. Considering the triviality of implementation, a bound at the Rosette VM level should be implemented; implementation is in progress.
    2. A bound for the Rholang Interpreter is not only feasible but superior to one for Rosette. Implementation is in progress.

...

We need a resource bound to ensure that a program does not consume more resources than it has been allocated.

To do this, define a potential function Φ : Σ → Q+ that maps each program state to some non-negative rational number. Φ represents the resources available for consumption in a given state. If it can be shown that P →α P' ∧ Φ(P) ≥ Ω(α) ⇒ Φ(P') = Φ(P) - Ω(α) is true for all program states, then Φ(P) is a valid resource bound. Clearly, the validity of Φ depends on the definition of T.

Implementing a resource bound for sequential programs is trivial. It basically amounts to maintaining a counter to represent consumable resources. The initial value of the counter equals the initial resources allocated to the program. With every operation, the counter is decremented by the cost of the operation.
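
A minimal sketch of such a counter is given below; the Op type and toy cost model are illustrative placeholders rather than part of any existing codebase, and the check-and-decrement step corresponds directly to the condition Φ(P) ≥ Ω(α) ⇒ Φ(P') = Φ(P) - Ω(α) above.

// Minimal sketch of a sequential gas counter; Op and the cost model below
// are illustrative placeholders, not taken from any existing codebase.
object SequentialGas {
  sealed trait Op
  case object Send    extends Op
  case object Receive extends Op

  // A toy cost model Ω : Op → Q+.
  def cost(op: Op): BigDecimal = op match {
    case Send    => 2
    case Receive => 3
  }

  final class OutOfGas(val remaining: BigDecimal) extends RuntimeException

  final class GasCounter(initial: BigDecimal) {
    private var remaining: BigDecimal = initial   // Φ of the current state

    // A transition is allowed only if Φ(P) ≥ Ω(α); afterwards Φ(P') = Φ(P) - Ω(α).
    def charge(op: Op): Unit = {
      val c = cost(op)
      if (remaining < c) throw new OutOfGas(remaining)
      remaining -= c
    }
  }
}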

For concurrent programs, matters are more complicated. For Rholang in particular, two processes can transition independently (represented P | Q →α P' | Q). This implies the existence of multiple resource pools, one for P and one for Q. If P transitions independently of Q, then Φ(P) must be checked independently of Φ(Q); otherwise, the transitions are linearized around a single resource pool.

In addition, resource allocation must be fair. For example, the transition P | Q → Nil | Q describes a program state in which P transitions to the final state Nil. If the resources belonging to Nil are greater than zero, Φ(Nil) > 0, and the resources of Q are less than the cost of a transition, Φ(Q) < Ω(α) for some transition Q →α Q', then Φ(Nil) should be relinquished to Q to enable the transition. For instance, if Φ(Nil) = 2, Φ(Q) = 2, and Ω(α) = 3, then Q can only proceed once the two units held by Nil are transferred to it.

Basically, we have to answer three principal questions:

...

A VM-level resource bound is more precise than one at the Rholang source level, but it is dependent on the Rosette VM.

Rosette VM is a labeled transition system T = (Σ, Op, →), where Σ is the set of possible VM states (P ∈ Σ), Op is the set of Rosette VM opcodes and primitives (op ∈ Op), and → ⊆ Σ × Op × Σ is the transition relation. Then, a cost model for Rosette is just a function Ω : Op → Q+ that assigns a cost to each opcode and primitive.

Rosette VM will not support parallelism in the Mercury release. Eliminating parallelism eliminates the fairness problem outlined above. In fact, since opcode execution is sequential, it suffices to maintain a single gas counter as a component of VMState, without affecting the execution semantics. To account for the impact of input arguments on the complexity of an opcode, we simply make the cost of an opcode parametric in its input arguments, i.e., Ω : Op → Arg → Q+. This is the same approach taken for the cost model of the Ethereum Virtual Machine.
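
A minimal sketch of what this could look like is given below; Opcode, Arg, and VMState are hypothetical stand-ins rather than the actual Rosette VM types, and the costs are arbitrary.

// Illustrative sketch only; Opcode, Arg, and VMState here are stand-ins,
// not the actual Rosette VM types.
object RosetteGasSketch {
  sealed trait Opcode
  case object OpPush  extends Opcode
  case object OpApply extends Opcode

  type Arg = Vector[Any]

  // Ω : Op → Arg → Q+, i.e. the cost of an opcode may depend on its arguments
  // (here OpApply is charged per argument, purely as an example).
  def cost(op: Opcode)(args: Arg): BigDecimal = op match {
    case OpPush  => 1
    case OpApply => 2 + args.size
  }

  // A single gas counter carried as a component of the VM state.
  final case class VMState(gas: BigDecimal /*, stack, registers, ... */)

  // Executing one opcode checks and decrements the counter before dispatch;
  // the rest of the execution semantics is unchanged.
  def step(state: VMState, op: Opcode, args: Arg): Either[String, VMState] = {
    val c = cost(op)(args)
    if (state.gas < c) Left("out of gas")
    else Right(state.copy(gas = state.gas - c)) // ... then execute the opcode as usual
  }
}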

Pros:

...

produce: Store → Term → Unit

Given a store and an environment E, the cost of the transition Queue[(E, for(@Nil ← x){ P } | Zero ) :: R] → Queue[R] can be (roughly) calculated as Ω(read + consume + write + i) + Ω(instr(P)). The first term quantifies the complexity of a single bind; it uses i to account for the overhead of pattern matching and parallelization, though these could potentially be exported as "instructions" too. The second term represents the cost of the instructions that make up P. Naturally, Zero has no cost.
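
As a rough illustration of the shape of this calculation, consider the sketch below; the Instr type and all constants are hypothetical, not the interpreter's actual instruction set or costs.

// Rough illustration of the calculation above; the Instr type and the
// constants are hypothetical, chosen only to show the shape of the formula.
object ReductionCostSketch {
  sealed trait Instr
  case object Bind extends Instr
  case object Eval extends Instr

  // Per-instruction costs for whatever instruction set P reduces to.
  def instrCost(instr: Instr): BigDecimal = instr match {
    case Bind => 2
    case Eval => 1
  }

  // Overheads of a single bind: read, consume, write, plus i for the
  // pattern-matching / parallelization overhead.
  val read: BigDecimal    = 1
  val consume: BigDecimal = 2
  val write: BigDecimal   = 1
  val i: BigDecimal       = 1

  // Ω(read + consume + write + i) + Ω(instr(P)); Zero contributes nothing.
  def reductionCost(p: Seq[Instr]): BigDecimal =
    (read + consume + write + i) + p.map(instrCost).sum
}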

With a notion of state and a cost model, the resource bound can be implemented on top of the threading model. To start, the main thread is given a portion of the gas implicitly purchased by the message sender. For every fork, thread resources are divided. For every join, thread resources are added. For every instruction executed on a thread, its resources are decremented. If a thread's resources are entirely depleted, it must block and query its parent for more. If a thread terminates with resources remaining, it relinquishes those to its parent. This continues until either the program terminates or the main thread's resources are depleted.
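
A sketch of this fork/join accounting is given below; GasPool, the even splitting policy, and the locking strategy are illustrative assumptions, not the interpreter's actual design.

// Sketch of per-thread gas with fork/join accounting; GasPool and its
// splitting policy are illustrative, not the interpreter's actual design.
final class GasPool(initial: BigDecimal, parent: Option[GasPool] = None) {
  private var remaining: BigDecimal = initial

  // Fork: divide this thread's remaining gas evenly among n child threads.
  def fork(n: Int): Seq[GasPool] = synchronized {
    val share = remaining / n
    remaining = 0
    Seq.fill(n)(new GasPool(share, parent = Some(this)))
  }

  // Charge the cost of one instruction; if depleted, query the parent for
  // the shortfall. Returns false when no more gas can be obtained.
  def charge(cost: BigDecimal): Boolean = synchronized {
    if (remaining >= cost) { remaining -= cost; true }
    else if (parent.exists(_.withdraw(cost - remaining))) {
      remaining = 0 // the parent covered the shortfall
      true
    } else false
  }

  // Join / termination: relinquish any leftover gas to the parent pool.
  def join(): Unit = synchronized {
    parent.foreach(_.deposit(remaining))
    remaining = 0
  }

  private def deposit(amount: BigDecimal): Unit =
    synchronized { remaining += amount }

  private def withdraw(amount: BigDecimal): Boolean = synchronized {
    if (remaining < amount) false
    else { remaining -= amount; true }
  }
}

Splitting gas evenly on fork is only one possible policy; the fairness requirement above only demands that gas a terminated thread no longer needs eventually flows back to threads that can still make progress.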

Pros:

...