Cost accounting - storage issue

Cost accounting suffers from yet another non-determinism problem, this time concerning the storage cost.

Given a produce and a consume on the same channel:

for(y <- @x) { Nil } | @x!(1)

deployed together, the storage cost is non-deterministic because we refund whenever a production doesn't "stick" in the tuplespace. If the produce finds a matching consume, I am first charged for storing the produce and then refunded (and similarly in the consume/produce scenario).
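
A minimal sketch of this charge-then-refund behaviour in Scala; CostAccount, storageCostOf and the foundMatch flag are illustrative names, not the actual reducer API:

// Illustrative model only; all names are hypothetical.
final case class Produce(data: Array[Byte], persist: Boolean)

final class CostAccount(var phloLeft: Long) {
  def charge(cost: Long): Unit = phloLeft -= cost
  def refund(cost: Long): Unit = phloLeft += cost
}

def storageCostOf(p: Produce): Long = p.data.length.toLong

// Charge up front for storing the produce; refund if it found a match right
// away and therefore never landed in the tuplespace.
def handleProduce(p: Produce, foundMatch: Boolean, acc: CostAccount): Unit = {
  acc.charge(storageCostOf(p))
  if (foundMatch && !p.persist) acc.refund(storageCostOf(p))
}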

If there is a COMM event within the deploy and all the productions are linear (persist = false), then we have two issues:
1) the final cost is non-deterministic (because of the non-deterministic storage cost)
2) we shouldn't charge at all if the deploy doesn't leave a trace in the tuplespace.


Kyle Butt's proposal is to refund for clearing the tuplespace: if my produce/consume removes data from the tuplespace, we refund proportionally to the cost of putting that data there in the first place.
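
A sketch of that idea, assuming the storage cost originally paid for a removed datum can be looked up; that per-datum bookkeeping is an assumption, not something RSpace tracks today:

// Kyle's proposal (sketch): removing data from the tuplespace refunds the
// cost its producer originally paid to store it.
final case class StoredDatum(bytes: Array[Byte], paidStorageCost: Long)

def refundForRemoval(phloLeft: Long, removed: Seq[StoredDatum]): Long =
  phloLeft + removed.map(_.paidStorageCost).sum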

Michael Birch raised a concern about possible arbitrage. Apparently Ethereum has such a problem, but Kyle is convinced that in our solution the arbitrage is only temporary and a user can gain and lose equally.


Comment from Mateusz Gorski:
Kyle's proposal can't be implemented in the reducer, because the reducer has no knowledge of whether the matched datum/continuation is linear. The algorithm is as follows (the input is either a produce or a consume; a sketch follows the list):

  1. Charge proportionally to the byte size of the input.
  2. Execute the input.
  3. If the input is linear and a match is found, refund the charge from (1).
  4. Also refund for clearing the tuplespace.
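
A sketch of these four steps in Scala; execute and the flags are illustrative, and step 4 is deliberately left as a comment because the reducer cannot compute it:

final class Phlos(var left: Long) {
  def charge(cost: Long): Unit = left -= cost
  def refund(cost: Long): Unit = left += cost
}

// Placeholder for the actual reduction step; returns the matched datum/continuation, if any.
def execute(): Option[Array[Byte]] = None

def evalInput(inputBytes: Long, inputIsLinear: Boolean, phlos: Phlos): Unit = {
  phlos.charge(inputBytes)                   // 1. charge by the byte size of the input
  val matched = execute()                    // 2. execute the produce/consume
  if (matched.isDefined && inputIsLinear)
    phlos.refund(inputBytes)                 // 3. the input never landed in the tuplespace
  if (matched.isDefined) {
    // 4. refund for clearing the tuplespace -- but the reducer does not know
    //    whether the matched datum/continuation was linear or persistent,
    //    so it cannot tell whether anything was actually cleared.
  }
}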

The problem with this proposal is that we don't know whether we have cleared the tuplespace, because we don't know whether the matched production was linear or persistent.

In my opinion, from the reducer's point of view there is no way to tell definitively (and correctly) whether we have cleared the tuplespace. Even if we track all the produces, consumes and COMM events that happened, we still don't know whether we are clearing the tuplespace or not. Example:

// Deploy #1: 
@x!!(10)
// Deploy #2:
for(x <- @x) { Nil } | @x!(10)

In Deploy #2 we don't know whether the 10 on the @x channel comes from our own linear production or from the persistent one left by Deploy #1.

My claim is that the only way to do this correctly is to ask RSpace for the byte-size delta between deploys and charge or refund accordingly. It would work as follows (pseudocode; a fuller sketch follows the list):

1. a := space.getHotStoreSize()
2. (phloLeft, errors) = evalDeploy(deploy)
3. b := space.getHotStoreSize()
4. c := b - a
5. if(c > 0) costAcc.charge(c) else costAcc.refund(-c) // charge for growth, refund for shrinkage; if the charge exceeds the phlos available at this point, we fail the deploy with OutOfPhloError
6. space.checkpoint() // persist hot store to LMDB
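
A fuller Scala sketch of the same flow; getHotStoreSize and checkpoint describe the interface being proposed here, not methods RSpace exposes today:

trait Space {
  def getHotStoreSize(): Long // bytes currently held in the hot store
  def checkpoint(): Unit      // persist the hot store to LMDB
}

final class PhloAccount(var left: Long) {
  def charge(cost: Long): Unit = {
    left -= cost
    if (left < 0) sys.error("OutOfPhloError") // not enough phlo left to pay for storage
  }
  def refund(cost: Long): Unit = left += cost
}

def settleStorage(space: Space, acc: PhloAccount)(runDeploy: () => Unit): Unit = {
  val before = space.getHotStoreSize()
  runDeploy()                                  // evaluate the deploy
  val delta = space.getHotStoreSize() - before // bytes added (positive) or cleared (negative)
  if (delta > 0) acc.charge(delta) else acc.refund(-delta)
  space.checkpoint()                           // persist the hot store to LMDB
}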



Michael Stay's comment:

There are three costs that I see. One is the cost of reading the initial program. One is the cost of computation, e.g. COMM events, matching, and expression reduction. The last is the cost of storage. The cost of storage should be proportional to the change in the size of the tuplespace (or zero if we're not refunding anything).
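
One way to read that comment as a formula (the unit price of one phlo per byte and the clamping to zero for the no-refund case are my interpretation):

// Total deploy cost = reading the program + computation + storage,
// where storage is proportional to the tuplespace size delta.
def totalCost(readCost: Long, computationCost: Long, tuplespaceDelta: Long, refunding: Boolean): Long = {
  val storageCost = if (refunding) tuplespaceDelta else math.max(0L, tuplespaceDelta)
  readCost + computationCost + storageCost
}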



Another solution is to also return the persist flag together with the matching datum/continuation. That is enough to decide whether we should refund for clearing the tuplespace, and it is also faster than checking the size of the hot store.
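
A sketch of that alternative, assuming the match result can carry the persist flag of whatever it removed (the types and names are illustrative):

final case class MatchResult[A](matched: A, persist: Boolean)

// If the removed datum/continuation was linear, the tuplespace was cleared
// and we refund its storage cost; a persistent match clears nothing.
def clearingRefund[A](result: Option[MatchResult[A]], storageCostOf: A => Long): Long =
  result match {
    case Some(MatchResult(m, persist)) if !persist => storageCostOf(m)
    case _                                         => 0L
  }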