
Agile Coretime: What's the 'Right' Price?

In this article adapted from a Grill blogpost, eskimor outlines a proposal aimed at finding "the right price" for coretime sales - a variant of the Dutch "descending price" auction model.

eskimor
Parachains Team Lead @ Parity Technologies
June 28, 2024
5 Min Read

This article has been reproduced and adapted from this post published on Grill, a social platform that allows bloggers and their followers to earn together. Grill is an application built on Subsocial, a Polkadot parachain.

Agile Coretime is specified in the Polkadot Fellowship's RFC-1, which only briefly describes an example pricing model. The RFC specifically states that the described model is meant only to illustrate that a solution exists, and should not be treated as a concrete proposal.

This blog post explains Agile Coretime as designed in RFC-1 and how we evolved the pricing model. Let's start with the requirements.

Requirements on the Model

A very basic requirement of any auction is that it can find the right price. For Agile Coretime we go with a variant of the Dutch auction for its simplicity and its ability to handle the sale of multiple units of the same good at once. Given enough time and satisfactory price resolution, front-running should also be very limited.

Another requirement is that we are selling coretime, and its major consumers are heavy-duty blockchains for which coretime is crucial and downtime is unacceptable. Those chains will be willing to pay a premium for guaranteed continuous coretime: ideally they don't want any risk of being front-run and outbid. They also want the ability to plan ahead and know what they are going to pay for coretime in the upcoming year.

The third and final requirement is that we want the market to be stable. In particular, price manipulation should be preventable.

Sales: Three-phase Structure

Coretime is organized around 28-day regions. During each region, coretime for the next 28-day region is sold, and so forth. So while the cores of the previous sale are being served, the sale for the next region is ongoing. The sale over that 28-day period is organized in phases.

RFC-1 specifies a basic model for the price function, consisting of three phases:

  • The interlude phase
  • The lead-in phase
  • A stable minimum price at the end

The most basic implementation would be linear and look something like this:

[Figure: the basic three-phase price curve with a linear lead-in]

Now let's look into these phases.

Interlude Phase (Renewals)

This phase is interesting and allows us to cater for heavy-duty blockchains requiring guaranteed coretime.

This is because in this phase only renewals are allowed. The regular sale has not yet started, so parachains eligible for renewal cannot be outbid or front-run in this phase.

So what exactly is a renewal? A renewal is similar to a purchase, except that it can already happen in the interlude phase (before the regular market starts) and that the price to be paid is predetermined and capped. In other words, you get:

  • Predictability and plannability on price
  • Guaranteed coretime

Under the following conditions:

  • For a core to become eligible for renewal, it must have been assigned, via the assign extrinsic, to a single parachain for the entire 28-day sale period with finality "Final".
  • The renewed core will automatically be assigned to the same parachain with finality "Final" again: it is not possible to transfer or sell renewal rights.
  • The price for a renewal will increase with each sale cycle. So while the maximum price is known in advance, it is expected that, at least over time, it will become higher than the price you would pay on the free market (see the sketch below).
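As a rough illustration, a capped renewal price could be computed like this. This is a minimal sketch: the 3% bump per cycle and all names are assumptions for illustration, not the broker pallet's actual API.

```rust
/// Illustrative bump per sale cycle; the real value is a runtime parameter.
const RENEWAL_BUMP_PERCENT: u128 = 3;

/// The maximum price a renewal can cost in the next sale cycle:
/// the previous renewal price, increased by a fixed percentage.
fn next_renewal_price_cap(previous_price: u128) -> u128 {
    previous_price + previous_price * RENEWAL_BUMP_PERCENT / 100
}
```

Because the cap compounds each cycle, a long-running renewal is expected to drift above the open-market price eventually, which is exactly the intended nudge back to the free market for chains that can tolerate some risk.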

One goal of Agile Coretime was to maximize resource utilization. Occupying a core fully, if most of your blocks are empty, is not a good use of the validator's time. The renewal system should only be attractive to chains which really need consistent block times every six seconds.

Hence, if you can afford some risk, you will move back to the open market at some point and maybe even consider secondary markets, where you can buy coretime regions that are scheduled to produce blocks every 12 seconds or even less frequently, if that fulfills your needs equally well. On top of that there is also the option of buying on-demand coretime.

Lead-in Phase

This phase is where the market opens for everybody. The price drops from the maximum price of the interlude down to the minimum price (configured and adjusted over time). In the simplest form this descent is linear over a period of time, perhaps two weeks. This is the Dutch auction part, where people can buy whatever cores remain available whenever the price appears low enough to them.
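In its simplest linear form, the lead-in price at any point could be computed like this (a sketch with assumed names; the lead-in factor here is the ratio between the interlude price and the minimum price):

```rust
/// Sketch of a linear lead-in price. `progress` runs from 0.0 (start of
/// the lead-in) to 1.0 (end); `leadin_factor` is how many times higher
/// the starting price is than the minimum. Names are illustrative.
fn linear_leadin_price(progress: f64, minimum_price: u128, leadin_factor: f64) -> u128 {
    assert!((0.0..=1.0).contains(&progress));
    // Interpolate linearly from `leadin_factor * minimum` down to `minimum`.
    let factor = leadin_factor - (leadin_factor - 1.0) * progress;
    (factor * minimum_price as f64) as u128
}
```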

Stable Minimum

The last section is whatever is left of the 28-day period. Here the price simply stays stable at the minimum, and any final buyers can still buy any remaining cores at that basement price. Having a fairly lengthy overall region auction cycle gives everyone who bought a region plenty of time to split, interlace, or resell it if they so desire.

Price Adjustments

Now with the general explanations out of the way, let's dive deeper into how things actually work and where they become problematic.

  • How should the lead-in curve actually look?
  • Is linear a good fit? And if so, how big should the factor be? (That is, how much larger than the minimum price should the interlude price be?)
  • How do we determine and adjust the minimum price from period to period?

The original proposal of RFC-1 suggested a linear model, with the minimum price being (more or less) the expected sale price. Market performance would be measured by comparing how many cores we intended to sell with how many we actually sold. If we sold all cores, we would adjust the minimum price to the sellout price plus some additional percentage.
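A sketch of that original adjustment rule; the 20% markup and all names here are illustrative assumptions, not the actual implementation:

```rust
/// Sketch of RFC-1's original adjustment idea: if the sale sold out,
/// raise the minimum above the last achieved price.
fn old_next_minimum_price(
    minimum_price: u128,
    cores_offered: u32,
    cores_sold: u32,
    sellout_price: u128,
) -> u128 {
    if cores_sold >= cores_offered {
        // Sold out: next minimum is the sellout price plus a markup.
        sellout_price + sellout_price / 5
    } else {
        // Not sold out: keep the minimum where it was.
        minimum_price
    }
}
```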

This model was adjusted, tuned, and deployed on Kusama, and it immediately proved problematic. Let's assume, for example (similar to what actually happened on Kusama), there was only a single core for sale and it sold right at the beginning of the lead-in period. This would have resulted in the next minimum price being that high price plus some percentage.

This caused a conflict: on the one hand you want a high lead-in factor to better capture the right price (otherwise you'd have to guess really well), but the higher the lead-in factor, the higher the next minimum price might go - it would overcompensate. In the example above, a single buyer willing to pay 10x would have driven the minimum price to at least 10x for the next sale, which might have 10 cores or more for sale: price manipulation was being amplified!

New Curve and Adjustments

We needed something better, so we decided to change the broker pallet to no longer adjust based on cores sold, but rather on the achieved price directly. This ensures price manipulation attacks are no longer possible (full details on this ticket, actual code here):

Let's start with the curve. By changing how price adjustments work, we are now able to use really high lead-in factors: we are going with 100 for now. Furthermore, we define a target price, which is no longer the minimum, but sits in the middle of the lead-in. This means our target price is 10 times the minimum price and the interlude price is 10 times the target price (or 100 times the minimum price). We therefore allow the target price to be off in both directions by a factor of 10 and would still be able to capture the value correctly.

We now have a notably different shape to the chart, with a steeper initial drop until the target price is reached, followed by a shallower drop until the stable minimum price is reached.

[Figure: the new two-segment lead-in curve, dropping steeply to the target price and then shallowly to the minimum]
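A minimal sketch of this two-segment lead-in factor, assuming a lead-in factor of 100 with the target price reached halfway through the lead-in (names and the exact shape are illustrative; the production logic lives in the broker pallet):

```rust
/// Sketch of the two-segment lead-in factor: 100x at the start of the
/// lead-in, 10x (the target) at the midpoint, 1x (the minimum) at the end.
/// `progress` runs from 0.0 to 1.0 over the lead-in phase.
fn leadin_factor(progress: f64) -> f64 {
    assert!((0.0..=1.0).contains(&progress));
    if progress < 0.5 {
        // Steep segment: drop from 100x down to 10x over the first half.
        100.0 - progress * 2.0 * 90.0
    } else {
        // Shallow segment: drop from 10x down to 1x over the second half.
        10.0 - (progress - 0.5) * 2.0 * 9.0
    }
}
```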

Now, how do price adjustments work? It's quite straightforward: we define the next target price as the sellout price of the previous sale. If the last core was sold at 10 DOT, for example, we would configure the minimum price of the next sale to be 1 DOT and the maximum as 100 DOT.
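In code, the adjustment could look roughly like this (a sketch under the assumptions above; names are illustrative, not the pallet's actual API):

```rust
/// Sketch of the new adjustment rule: the next target price is the
/// previous sale's sellout price, with minimum and maximum derived from
/// it by the factor of 10 in each direction.
fn next_sale_prices(sellout_price: u128) -> (u128, u128, u128) {
    let target = sellout_price; // next target = last achieved price
    let minimum = target / 10;  // target is 10x the minimum
    let maximum = target * 10;  // interlude price is 10x the target
    (minimum, target, maximum)
}
```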

Have we resolved our issue with price manipulation? Yes! Let's assume the minimum was 1 DOT and the maximum therefore 100 DOT, and someone bought the one core that was for sale for that 100 DOT. In the next sale period the minimum price would still only be 10 DOT, which was the target price of the previous sale. We therefore achieved two things here:

  • The worst thing that can happen from one sale to the next is that the new minimum price is the old target price. Meaning, even in that extreme scenario, people get cores for the price that was seen as the "market price" anyway, just one sale earlier.
  • There is no amplification anymore. That 100 DOT sale only brought the minimum price up to 10 DOT, which is still 10 times cheaper than the attacker's investment.

Refining the Renewal Price

To make the model work even for renewals, I had to adjust what we consider the sellout price. In RFC-1, renewals would not affect the sellout price. This is no longer the case - in fact, renewals now affect the price just like normal purchases do.

With this, even a market only consisting of renewals still has a market price, ensuring the open market stays aligned with renewals. On the flip side, if a renewal is made in the lead-in, it is competing with the regular market and could be outbid anytime. Therefore, we also record the price that was paid for the renewal as the sellout price (if it was the last purchase).

For a normal market, where renewals are done in the interlude and there are still some cores left for sale, taking renewals into account will have no effect, as the sellout price used is the last price that was achieved.
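A sketch of how the sellout price could be recorded with renewals included (names and structure are assumptions for illustration, not the pallet's actual types):

```rust
/// Illustrative sale state; names are assumptions, not the pallet's types.
struct SaleState {
    sellout_price: Option<u128>,
}

/// Record a completed purchase or renewal. Whichever happens last in the
/// sale sets the sellout price that drives the next sale's target.
fn note_sale(sale: &mut SaleState, price_paid: u128) {
    sale.sellout_price = Some(price_paid);
}
```

Since renewals happen in the interlude, any later open-market purchase simply overwrites the recorded price, which is why a normal market with leftover cores behaves exactly as before.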

Further (future) Alternatives

The new curve is already hinting towards a direction that has been suggested as an alternative by a few people: instead of having a linear curve or a linear curve with a dent in the middle, we could go all the way and create an exponential lead-in curve. An advantage of an exponential lead-in would be that price adjustments of the minimum price would likely not be needed at all, except if the price ever moved into the very steep part of the curve - then we might run into problems with price changes being too high from block to block.

Such a drift, if a problem at all, would only happen very slowly though and thus could easily be addressed via governance. I would therefore consider a full exponential curve a valid alternative solution. It should definitely be reconsidered if the solution proposed here is still showing any problems in practice.
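For illustration, an exponential lead-in with the same endpoints as the current curve (100x at the start, 1x at the end) could look like this. This is a sketch of the idea, not a concrete proposal:

```rust
/// Sketch of an exponential lead-in factor: 100^(1 - progress) yields
/// 100x at the start, 10x at the midpoint, and 1x at the end of the
/// lead-in, matching the endpoints of the current piecewise curve.
fn exponential_leadin_factor(progress: f64) -> f64 {
    assert!((0.0..=1.0).contains(&progress));
    100f64.powf(1.0 - progress)
}
```

Note that this curve also passes through 10x at the midpoint, so the target price would sit in the same place as with the current piecewise-linear curve.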