Symphonious

Living in a state of accord.

Why Miners Can Be Simultaneously Paid Too Much and Struggling to Survive

Note: This post is deliberately high level. It doesn’t attempt to be an economic proof or go into the gory details of exactly how the difficulty adjustment works. The aim is just to see the high level effects of how the system pans out without getting lost in the nitty gritty details of what is ultimately a very complex system.

Pretty much every time the word “miner” is mentioned in an Ethereum discussion someone will claim that miners are paid too much and miners will respond saying they’re struggling to survive. Turns out, both can be simultaneously true and in fact it’s pretty much the expected case.

The reason lies ultimately in the way the difficulty automatically adjusts. Without going into too much detail, Ethereum maintains a (very) roughly consistent block time by making it easier or harder to find the next block depending on whether the latest block was found too quickly or too slowly.

The other side of that equation is the total hash power. When there is a lot of hash power being used to find the next block, it will generally be found faster and when there’s less it will be found slower. Net result, as there’s more hash power, it gets harder to find the next block.
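
To make that feedback loop concrete, here's a toy sketch (not the real difficulty algorithm, and all the numbers are made up): block times shrink when hash power rises, and the difficulty nudges up or down until the block time drifts back to the target.

# Toy illustration of the difficulty feedback loop - not the real algorithm,
# just the idea: more hash power -> faster blocks -> difficulty rises until
# block times return to the target.

TARGET_BLOCK_TIME = 13.0  # seconds, roughly Ethereum's average block time


def next_difficulty(difficulty, last_block_time):
    """Nudge difficulty up if the last block was too fast, down if too slow."""
    return difficulty * (1.01 if last_block_time < TARGET_BLOCK_TIME else 0.99)


def average_block_time(hash_power, difficulty=1000.0, blocks=1000):
    for _ in range(blocks):
        block_time = difficulty / hash_power  # expected time to find a block
        difficulty = next_difficulty(difficulty, block_time)
    return difficulty / hash_power


print(average_block_time(hash_power=100))   # settles near 13s
print(average_block_time(hash_power=1000))  # still near 13s with 10x the hash power

Whatever hash power you throw at it, the difficulty just follows it up or down and block production settles back to roughly the same rate.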

Why Miners Are Always Struggling

So even when a miner has a consistent amount of mining power, if the total hash power increases, the rate at which they find blocks (ie the rate they get paid) will reduce. Similarly, if the total hash power decreases, the rate they find blocks and get paid will increase. When mining is profitable, more people will start mining to get a share of those sweet, sweet block rewards. That increases the total hash power and each individual miner winds up earning less.

That process will continue until there’s about the same amount of hash power being added by people investing in mining as there is hash power being turned off because it’s just not worth it. That balance appears to be just a little bit above break even, so individual miners will always wind up only just making a profit.

Why Miners Are Paid Too Much

Since the difficulty adjustments aim to have blocks, on average, a consistent distance apart, about the same number of blocks are created each day regardless of how much hash power is thrown at the problem. So roughly the same amount of new ETH is created and paid out as block rewards each day. While this ETH is created out of nowhere, it is effectively paid for by all ETH holders because increased issuance puts downward pressure on prices.

There’s only so much hash power required to keep the Ethereum chain secure. People will argue about how much that is but the exact number isn’t important here. So if the total hash power is more than that threshold we could in theory reduce the total amount paid to miners. That would result in miners earning less both individually and in aggregate, which will make some of them unprofitable and they’ll stop mining. That will then reduce the hash rate (which is what we wanted because we didn’t need that much) and each individual miner will become more profitable again. Once the equilibrium is found again there’ll be less paid out in total to miners but each individual miner will wind up earning about as much as they did before the change. The opposite process can be used if more hash power is required – increasing the block reward would temporarily make individual miners more profitable, but that would incentivise the addition of more hash power until miners are individually about as profitable as they were before.
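
As a very rough sketch of that equilibrium (all numbers invented, and real miner behaviour is far messier): assume hash power keeps joining while mining is profitable and dropping off while it isn't. Whatever total daily payout you pick, the hash power adjusts until each unit of hash power earns roughly what it costs to run.

# Toy model of the mining equilibrium. All numbers are invented - the point is
# only that changing the total payout changes the equilibrium hash power,
# not the long-run profitability of an individual miner.

COST_PER_HASH_PER_DAY = 0.01  # running cost of one unit of hash power, in ETH


def equilibrium_hash_power(total_daily_payout, hash_power=1000.0, days=5000):
    for _ in range(days):
        revenue_per_hash = total_daily_payout / hash_power
        if revenue_per_hash > COST_PER_HASH_PER_DAY:
            hash_power *= 1.01  # profitable: more hash power joins
        else:
            hash_power *= 0.99  # unprofitable: hash power drops off
    return hash_power


for payout in (1000, 2000, 4000):
    hp = equilibrium_hash_power(payout)
    print(f"payout {payout} ETH/day -> hash power {hp:,.0f}, "
          f"revenue per hash {payout / hp:.4f} ETH/day")

Doubling the payout roughly doubles the equilibrium hash power, but the revenue per unit of hash power ends up pinned just around its running cost either way.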

Transaction fees and the price of ETH in various fiat currencies are other variables that affect how much miners are paid. They can add a lot of variability, but the process is still essentially the same.  Having higher transaction fees or a higher price of ETH is just like increasing the block reward, just much less controllable or predictable.

Since it’s hard to determine exactly how much hash power is required to secure the chain, and it’s better to err on the side of more hash power, the typical case is that miners will be overpaid. We could theoretically pay less and still have a secure chain, but the variability in the price of ETH and uncertainty in exactly how much hash power is enough mean reducing the total payments to miners is too difficult and we just accept some amount of overpayment.

Resolving The Oxymoron

Hopefully you’ve seen the key distinction here that makes it logical that miners are both paid too much and are struggling to survive.  They’re paid too much in aggregate and struggling to survive individually.

Miners quite understandably are very focussed on their individual profitability, but as we’ve seen it doesn’t matter how much is paid in total to miners, the hash rate will just adjust back to the equilibrium where they’re struggling to survive again. To change that we’d have to continuously increase block rewards, trying to stay ahead of the new hash power that would attract. But even if we could stay ahead, the massive inflation would destroy the price of ETH so the cost of electricity and mining equipment relative to ETH would spike and miners may well end up worse off overall.

ETH holders on the other hand are concerned about the amount paid to miners in aggregate and whether that’s buying us too little, the right amount or too much hash power. That doesn’t mean they don’t care about individual miners, just that as explained above, paying more in total won’t make individual miners more profitable in the long run anyway.

The Complication of Lag and Averages

In all this, there’s a lot of mentions of “on average” or “eventually”. That’s because while there are clear economic functions at play, there is also a lot of probability and lag in finding the new equilibrium. It takes time for people to decide to invest in mining and to order the components required, so if the block reward doubled tomorrow, the hash rate wouldn’t suddenly double.  For a while miners would in fact make a lot more money but it would gradually reduce as the new hash power came online until the equilibrium is found. Similarly, if the block reward is halved the hash power doesn’t suddenly drop – miners wind up earning less for a while until the equilibrium is found again.

Throw in the variability from changing ETH prices and transaction fees and it can take even longer to find that equilibrium because the uncertainty causes people to put off new investment to see if high prices last or continue mining while unprofitable hoping the price or transaction fees will come back up.

While that lag does affect some details and needs to be factored into various decisions, it doesn’t change the fundamental economics. The equilibrium will wind up being found, even if it takes a long time, and miners will be back to that average amount of individual income – likely just managing to be profitable.

 

Hard Truths for ETH Stakers

If you’re getting into staking ETH on the beacon chain, it’s important to know what you’re actually signing up for. There’s lots of articles about how easy it is and how awesome, intelligent and good looking stakers are, so here’s a few hard truths you should make sure you’re ok with…

Short Term Stuff

Most people know this stuff so I’m just going to skim over it.

No Withdrawals Yet

You can’t withdraw staked ETH or rewards yet. There isn’t a fixed timeline on when you will be able to.

Penalties

Rewards on the beacon chain are not automatic – you actually have to run a node and perform the assigned duties.  If you fail to perform the duties, instead of a reward you’ll actually get a small penalty.

If lots of validators aren’t performing their duties well at the same time and the chain stops finalising, those small penalties will gradually get bigger and bigger.

Slashings

If your validator signs something that’s self-conflicting (ie not just wrong but disagreeing with what you’ve previously said) it can be slashed. This is a bigger penalty and your validator is immediately deactivated so you can’t earn any more rewards (and still can’t withdraw). How big depends on how many other validators are slashed at the same time – it can be anything from 0.5ETH to your entire stake.

Rewards Automatically Reduce as Validators Activate

As more validators begin staking, the rewards paid are automatically reduced. So while you might have earned 0.00004 ETH for a correct attestation last week, this week it may be only 0.00003 ETH, and next week it may be even less.
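
The mechanism behind this is the base reward calculation: per-validator rewards scale with one over the square root of the total ETH staked, so each newly activated validator dilutes everyone's rewards slightly. A rough sketch of just that part, using the phase 0 spec constants and ignoring everything else (inclusion delay, participation rates, proposer rewards):

# Sketch of the phase 0 base reward, to show why rewards shrink as more
# validators activate. BASE_REWARD_FACTOR and BASE_REWARDS_PER_EPOCH are the
# phase 0 spec constants; everything else about rewards is ignored here.
from math import isqrt

GWEI_PER_ETH = 10**9
MAX_EFFECTIVE_BALANCE = 32 * GWEI_PER_ETH
BASE_REWARD_FACTOR = 64
BASE_REWARDS_PER_EPOCH = 4


def base_reward(effective_balance, total_balance):
    """Base reward per reward component, in Gwei."""
    return (effective_balance * BASE_REWARD_FACTOR
            // isqrt(total_balance) // BASE_REWARDS_PER_EPOCH)


for validators in (50_000, 100_000, 200_000):
    total = validators * MAX_EFFECTIVE_BALANCE
    print(validators, "validators:", base_reward(MAX_EFFECTIVE_BALANCE, total), "Gwei per component")

Doubling the number of validators cuts the reward per attestation by roughly 30% (a factor of 1/√2), even though the total paid out across all validators keeps growing.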

Which brings us to the first key principle you should be ok with…

Key Principles

Pay the Minimum Amount Possible to Secure the Chain

The philosophy of Ethereum is to pay the minimum amount possible to ensure the chain remains secure.  This applies equally to miners in Eth1 and stakers in Eth2. That’s why rewards automatically reduce when there are more active validators – the chain has enough validators to be secure so it doesn’t need to provide as much incentive for staking. But it goes well beyond that.

The algorithm for determining those rewards may itself be changed to further reduce them if in the future it’s deemed that security would still be sufficient. There is absolutely no guarantee of any rate of reward.

In fact, there is already one proposal that could reduce rewards which was considered for the first protocol upgrade but ultimately seen as unnecessary. Specifically, the proposal was to cap the number of validators that can actually be active at any one time, with the rest being dormant (so not earning any rewards), and to randomly cycle which validators are dormant. It was seen as unnecessary because even if we continued activating validators at the maximum possible rate, it wouldn’t hit the cap and activate these changes for at least another year. At some point though it’s likely to be implemented as it provides an important reduction in the load required for nodes following the chain.

Which brings us to…

Ethereum Puts Itself First

Whenever there is a trade-off between individuals or groups and the safety, security and reliability of the Ethereum chain, the Ethereum protocol and Ethereum developers will always prioritise the chain first. After all, if the chain isn’t working no one can benefit from it.

One example of this in the beacon chain is the long delay before deposits activate – the chain is ensuring the validator set doesn’t change too fast because that’s a security risk. The fact that you have to wait 2+ weeks for your validator to activate is just unfortunate for you.

Note though, that this isn’t intentionally capricious. If the chain would function just as well either way, then sure, make the protocol as nice to people in any role as possible. This also ties into paying the minimum amount for security – if we can make the staking experience better in some way (that doesn’t reduce security) then people will likely be prepared to stake for less money and rewards can be reduced.

Ethereum Does Not Exist To Make Stakers Rich

This is really just an extension of the previous two, but it’s worth being extra clear. Staking is fun for the whole family and everything but it really is just a means to an end – providing security for the Ethereum chain. You can lose sight of that because at the moment the beacon chain’s only function is staking but that’s just a temporary thing. Ultimately Eth1 will merge in and all that staking will just be to provide security for contracts and transactions on the chain.

Remember, stakers are service providers to the network and they’re paid for their services with rewards.

Different people have different views on exactly what the point of Ethereum is (world computer, digital asset, money, all of the above?) but it’s definitely not to make stakers rich. Don’t expect any sympathy if it winds up being hard to make a profit – unless security is at risk, rewards won’t be increased (“Pay the Minimum Amount Possible to Secure the Chain” remember?).

Change Is Inevitable

Ethereum is a blockchain that has always changed and will continue to change and upgrade. There are an unusually large number of big changes coming as the collection of technologies referred to as Eth2 rolls out, but it won’t ever stop. There will always be new ideas to take advantage of and new problems to solve.

Those changes will be (or at least are intended to be) great for Ethereum overall, but they might not be for any particular person or group. They might mean stakers need more resources to keep up (e.g. the merge requiring running a full Eth1 node), they might affect reward rates or any number of other things. Just because something is true today, doesn’t mean it always will be.

The Good News

That all sounds pretty doom and gloom, but the good news is that staking is entirely optional. There are plenty of other places you can put your money if you don’t like the deal that staking is offering.

Most importantly, once withdrawals are possible, it’s also really easy to stop staking and get your stake back (assuming you didn’t go getting yourself slashed badly…). It’s like if you could buy a mining rig with a guarantee that it could be sold for the original price when you’re done with it and you get to keep whatever profit or loss you’ve earned.

 

PS: I’ve been informed that staking also doesn’t guarantee to make you awesome, intelligent or good looking.

Exploring Eth2: Attestation Rewards and Validator Performance

Through the beacon chain testnets people have been gradually developing better ways to evaluate how well their validators are working. Jim McDonald made a big leap forward by defining attestation effectiveness, which focuses on the inclusion distance for attestations. That revealed a bunch of opportunities for clients to improve the attestation inclusion distance, and attestation effectiveness scores improved quite dramatically, to the point that scores are now generally at or close to 100% pretty consistently.
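
For anyone not familiar with it, here's a minimal sketch of what attestation effectiveness measures, as I understand Jim's definition: the optimal (earliest possible) inclusion distance divided by the actual inclusion distance, so 100% means the attestation was included as early as the chain allowed.

# Rough sketch of attestation effectiveness as I understand the definition:
# optimal inclusion distance / actual inclusion distance, where the optimal
# distance skips over any empty slots the validator couldn't control.

def attestation_effectiveness(attestation_slot, inclusion_slot, empty_slots=()):
    earliest = attestation_slot + 1
    while earliest in empty_slots:
        earliest += 1  # couldn't have been included in an empty slot
    optimal_distance = earliest - attestation_slot
    actual_distance = inclusion_slot - attestation_slot
    return optimal_distance / actual_distance


print(attestation_effectiveness(100, 101))                     # 1.0 - included in the very next block
print(attestation_effectiveness(100, 103))                     # 0.33 - included 2 slots later than optimal
print(attestation_effectiveness(100, 102, empty_slots={101}))  # 1.0 - slot 101 had no block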

But then beaconcha.in added a view of how much was earned for each attestation. Suddenly people were left wondering why some of their “rewards” were negative even though the attestation was included quickly. It became obvious that inclusion distance was only one part of attestation rewards.  

How Attestation Rewards Work

There are four parts in total:

  • Reward based on inclusion distance
  • Reward or penalty for getting the source correct
  • Reward or penalty for getting the target correct
  • Reward or penalty for getting the head correct

It turns out that inclusion distance is the smallest of these components. Ben Edgington explains all the detail very well in his annotated spec. First we calculate the base reward, which factors in the validator’s effective balance (lower rewards if your effective balance is less than the maximum 32ETH) and the total staked ETH (reducing rewards as the number of validators increases).

Then for the source, target and head attestations, a reward or penalty is applied depending on whether the attestation gets them right. A missed attestation is penalised for all three. The penalty is 1 * base reward. The reward however factors in a protection against discouragement attacks, so it is actually base reward * percentage of ETH that attested correctly. This provides an incentive for your validator to do whatever it can, like relaying gossip well, to help everyone stay in sync.

Finally the inclusion distance reward is added. There’s no penalty associated with this, missed attestations just don’t get the reward. 1/8th of this reward is actually given to the block proposer as reward for including the attestation, so the maximum available reward for the attester is 7/8th of the base reward. So it winds up being (7/8 * base reward) / inclusion distance.

So inclusion distance is actually the smallest component of the reward – not only is it at most 7/8ths of a base reward, there’s no penalty attached. The most important is actually getting the source right, as attestations with an incorrect source can’t be included at all, resulting in a 3 * base reward penalty. Fortunately getting the source right is also the easiest because it’s back at the justified checkpoint so quite hard to get wrong.
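
To make that concrete, here's a small sketch of the per-attestation reward measured in base rewards. It's simplified from the spec: participation_rate stands in for the fraction of staked ETH that attested correctly (the discouragement-attack scaling), and an attestation with an incorrect source is treated as missed since it can't be included at all.

# Sketch of the phase 0 attestation reward, in units of the base reward.
# Simplified: each correct component earns base_reward * participation_rate,
# each missed/incorrect component is penalised one base reward, and 1/8 of
# the inclusion reward goes to the block proposer rather than the attester.

def attestation_reward(correct_source, correct_target, correct_head,
                       inclusion_distance, participation_rate=0.99):
    if not correct_source:
        # An attestation with the wrong source can't be included at all, so it's
        # penalised one base reward for each of source, target and head.
        return -3.0
    reward = participation_rate                              # source reward
    reward += participation_rate if correct_target else -1.0
    reward += participation_rate if correct_head else -1.0
    # Inclusion reward: the attester keeps 7/8 of a base reward, divided by the
    # inclusion distance. The other 1/8 goes to the proposer. No penalty if missed.
    reward += (7 / 8) / inclusion_distance
    return reward


print(attestation_reward(True, True, True, 1))     # ~3.85 - everything right, included next slot
print(attestation_reward(True, True, False, 1))    # wrong head, still comfortably positive
print(attestation_reward(False, False, False, 1))  # -3.0 - missed entirely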

Evaluating Validators

If we’re trying to evaluate how well our validator is doing, these calculations add quite a lot of noise that makes it difficult. Especially if we want to know if our validator is doing better this week than it was last week. If you just look at the change in validator balance, the number of block proposals you were randomly assigned dominates, and even just looking at rewards for attestations is affected by the number of validators and how well other people are doing.

I’d propose that we really want to measure what percentage of available rewards were earned. That gives us a nice simple percentage and is highly comparable across different validators and time periods.

The first step is to think in terms of the number of base rewards earned which eliminates variance from the validator’s balance and total eth staked.  Then we can ignore the discouragement attack factor and say that each of source, target and head results in either plus or minus 1 base reward. Inclusion distance is up to 7/8th of a base reward, scaling with distance like normal. Like for attestation effectiveness, we’d ideally also use the optimal inclusion distance – only counting the distance from the first block after the attestation slot to avoid penalising the validator for empty slots they couldn’t control. In practice this doesn’t make a big difference on MainNet as there aren’t too many missed slots.

So each attestation duty can wind up scoring between -3 and +3.875 (3 and 7/8ths). For any missed attestation, the score is -3. For any included attestation, we calculate the 4 components of the score with:

Inclusion distance score: (0.875 / optimal_inclusion_distance)

Source score: 1 (must be correct)

Target and head scores: 1 if correct, -1 if incorrect

And to get a combined score, we need to add them together.
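
As a sketch (ignoring the discouragement attack factor as described above, so each of source, target and head is worth exactly plus or minus one base reward):

# Sketch of the proposed per-duty score: -3 for a missed attestation, up to
# +3.875 for a perfect one, then expressed as a fraction of the maximum.

def duty_score(included, correct_target=False, correct_head=False,
               optimal_inclusion_distance=1):
    if not included:
        return -3.0
    score = 1.0  # source must have been correct for the attestation to be included
    score += 1.0 if correct_target else -1.0
    score += 1.0 if correct_head else -1.0
    score += (7 / 8) / optimal_inclusion_distance
    return score


def percentage_of_available_rewards(scores):
    return sum(scores) / (len(scores) * 3.875)


perfect = duty_score(True, correct_target=True, correct_head=True)
late = duty_score(True, correct_target=True, correct_head=False, optimal_inclusion_distance=2)
print(perfect)                                                     # 3.875
print(percentage_of_available_rewards([perfect, perfect, late]))  # ~0.79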

We can actually do this with the metrics available in Teku today:

(
validator_performance_correct_head_block_count - (validator_performance_expected_attestations - validator_performance_correct_head_block_count) +
validator_performance_correct_target_count - (validator_performance_expected_attestations - validator_performance_correct_target_count) +
validator_performance_included_attestations - (validator_performance_expected_attestations - validator_performance_included_attestations) +
(0.875 * validator_performance_included_attestations / validator_performance_inclusion_distance_average)
) / validator_performance_expected_attestations / 3.875

Which is essentially:

(
(correct_head_count - incorrect_head_count +
correct_target_count - incorrect_target_count +
included_attestation_count - missed_attestation_count
) / expected_attestation_count +
0.875 / inclusion_distance_average
) / 3.875

This gives a score between 0 and 1, pretty closely approximating the percentage of possible attestation rewards that were earned. With the discouragement attack factor ignored, we are slightly overvaluing the source, target and head rewards but it’s very minimal. On MainNet currently they should be +0.99 when correct instead of +1, so it doesn’t seem worth worrying about.

You can also calculate this fairly well for any validator using a chaind database with a very long and slow SQL query.

Evaluating the Results

Looking at the numbers for a bunch of validators on MainNet, validators are generally scoring in the high 90% range, with scores over 99% being common, but over anything more than a day it’s hard to find a validator at 100%. That generally matches what we’d expect: participation rates on MainNet are high, but some blocks do turn up late, which will likely result in at least the head being wrong.

One key piece of randomness that’s still creeping into these scores though is that it’s significantly more likely to get the target wrong if you’re attesting to the first slot of the epoch – because then target and head are the same and those first blocks tend to come quite late. There are spec changes coming which should solve this but it is still affecting results at the moment.

What About Block Rewards?

For now I’m just looking at what percentage of blocks are successfully proposed when scheduled (should be 100%). There are a lot of factors that affect the maximum reward available for blocks – since blocks aren’t reaching the limit on the number of attestations that can be included, I’d expect that all the variation in rewards comes down to luck. Definitely an area that could use some further research though.

Exploring Eth2: Attestation Inclusion Rates with chaind

For the beacon chain MainNet, I’ve set up an instance of chaind – a handy little utility that connects to a beacon node (like Teku) and dumps out the chain data to a database so it’s easy to run various queries against. While you can do this with various REST API queries and custom code, having it all accessible by SQL makes ad-hoc queries a lot easier and lets you add it as a datasource for Grafana to display lots of powerful metrics.

Herewith, a few useful queries and snippets that make life easier. I’ve set up my dashboard with two variables – genesisTime (1606824023 for MainNet) and perfValidators – a list of validator indices that I often want to monitor (e.g. 1,1234,5533,2233).

Filter an epoch based table to the current Grafana time range:

WHERE f_epoch >= ($__unixEpochFrom() - $genesisTime) / 12 / 32 AND f_epoch <= ($__unixEpochTo() - $genesisTime) / 12 / 32

Filter a slot based table to the current Grafana time range:

WHERE f_slot >= ($__unixEpochFrom() - $genesisTime) / 12 AND f_slot <= ($__unixEpochTo() - $genesisTime) / 12

Average balance, in ETH, of a set of validators for the latest epoch:

SELECT
f_epoch * 32 * 12 + $genesisTime AS "time",
AVG(f_balance) / 1000000000.0 as "balance"
FROM t_validator_balances
WHERE
f_epoch >= ($__unixEpochFrom() - $genesisTime) / 12 / 32 AND f_epoch <= ($__unixEpochTo() - $genesisTime) / 12 / 32 AND
f_validator_index IN ($perfValidators)
GROUP BY f_epoch
ORDER BY f_epoch DESC
LIMIT 1

Balances by validator, by epoch suitable for graphing:

SELECT
CONCAT(f_validator_index, ' ') AS metric,
f_epoch * 32 * 12 + $genesisTime AS "time",
MAX(f_balance) / 1000000000.0 as "balance"
FROM t_validator_balances
WHERE
f_epoch >= ($__unixEpochFrom() - $genesisTime) / 12 / 32 AND f_epoch <= ($__unixEpochTo() - $genesisTime) / 12 / 32 AND
f_validator_index IN ($perfValidators)
GROUP BY f_epoch, f_validator_index
ORDER BY f_epoch

Calculate the percentage of blocks successfully produced and included on the canonical chain:

SELECT
SUM(1) FILTER (WHERE b.f_root IS NOT NULL)::double precision / COUNT(*) as "value"
FROM t_proposer_duties AS d
LEFT JOIN t_blocks AS b ON b.f_slot = d.f_slot
WHERE
d.f_slot >= ($__unixEpochFrom() - $genesisTime) / 12 AND d.f_slot <= ($__unixEpochTo() - $genesisTime) / 12 AND
d.f_validator_index IN ($perfValidators)

And then we come to attestations… They’re a special kind of fun because they can be aggregated and the validator is identified by its position in a committee rather than by its validator index directly. We’re going to build up to having a query that calculates the inclusion rate for attestations but we’ll take it a step at a time with a bunch of useful queries and snippets along the way.

Firstly, we need to deal with the fact that attestations for the committee may have been spread across multiple aggregates when included in blocks and recombine them into one set of aggregation bits that represents every validator from that slot and committee which was included in any attestation:

SELECT x.f_slot, x.f_committee_index, bit_or(RIGHT(x.f_aggregation_bits::text, -1)::varbit) AS f_aggregation_bits 
FROM t_attestations x
GROUP BY x.f_slot, x.f_committee_index

Note: you may want to add a WHERE clause to constrain this to the relevant time range to improve performance.

The aggregation bits are stored as a bytea, so we first have to convert it to a varbit with a weird RIGHT(x.f_aggregation_bits::text, -1)::varbit because the internet said to and then we can do a bitwise or to combine them all.

Now we have slot, committee index and a long binary string indicating which validators in the committee attested. So that we can identify the validators we’re interested in, we want to convert that binary string to an array of validator indices. I don’t know of a function that can do that in Postgres, but we can write one:

create or replace function get_indices(varbit, bigint[]) returns bigint[] as $$
-- $1: the combined aggregation bits (an SSZ bitlist: little-endian within each byte,
--     with a marker bit just past the last real entry)
-- $2: the committee - an array of validator indices in committee order
declare
  i int;
  l int;
  s int;
  c bigint[];
begin
  c := '{}';
  -- Find the marker bit by scanning the final byte from its most significant bit.
  l := 0;
  loop
    exit when get_bit($1, bit_length($1) - 8 + l) = 1;
    l := l + 1;
  end loop;
  -- The marker bit's position is the actual length of the bitlist (the committee size).
  s := (7 - l) + (octet_length($1) - 1) * 8;
  -- For each set bit, add the validator index at the same position in the committee.
  for i in reverse s - 1..0 loop
    if get_bit($1, (i / 8) * 8 + 7 - (i % 8)) = 1 then
      c := c || $2[i + 1];
    end if;
  end loop;
  return c;
end;
$$ language plpgsql;

UPDATE: The initial version of this function was incorrect. Endianness strikes again… The updated version fully parses the SSZ bitlist format, using the marker bit to calculate the exact list size.

This takes the binary string we had as the first argument (varbit) and the f_committee field from the t_beacon_committees table (an array of validator indices) as the second. It iterates through each bit in the binary string and, if it’s set, adds the validator index from the same position in the beacon committee to the result.

Combining the two, we can get the list of validator indices that were included by slot and committee index:

SELECT c.f_slot, c.f_index, get_indices(a.f_aggregation_bits, c.f_committee)
FROM t_beacon_committees c
JOIN (SELECT x.f_slot, x.f_committee_index, bit_or(RIGHT(x.f_aggregation_bits::text, -1)::varbit) AS f_aggregation_bits
FROM t_attestations x
GROUP BY x.f_slot, x.f_committee_index) a ON a.f_slot = c.f_slot AND a.f_committee_index = c.f_index

If we just want to know how many validators attested from that committee and slot, we’d change the get_indices call to:

cardinality(get_indices(a.f_aggregation_bits, c.f_committee))

But we really only want to know how many of our validators attested so we need to find the intersection of our validators and the validators that attested.  That part of the select then becomes:

icount(get_indices(a.f_aggregation_bits, c.f_committee)::int[] & '{$perfValidators}'::int[])

Since we ultimately just want the total number of attestations included from our validators, we can aggregate all those rows with the SUM function and our query becomes:

SELECT SUM(icount(get_indices(a.f_aggregation_bits, c.f_committee)::int[] & '{$perfValidators}'::int[]))
FROM t_beacon_committees c
JOIN (SELECT x.f_slot, x.f_committee_index, bit_or(RIGHT(x.f_aggregation_bits::text, -1)::varbit) AS f_aggregation_bits
FROM t_attestations x
GROUP BY x.f_slot, x.f_committee_index) a ON a.f_slot = c.f_slot AND a.f_committee_index = c.f_index

But that’s only half the story (fortunately it’s the more complicated half) – we also want to know how many attestations we should have produced. That means counting the number of times our validators appear in committees:

SELECT SUM(icount(c.f_committee::int[] & '{$perfValidators}'))
FROM t_beacon_committees c

It’s tempting to just use a COUNT(*), but that finds the number of committees containing any of our validators and so undercounts the number of expected attestations if two of our validators are in the same committee. So we have to apply the same trick as we did for counting the number of attestations – find the intersection and then count.

We can filter the number of rows we have to inspect by only including committees containing our validators with:

WHERE c.f_committee && '{$perfValidators}'

So putting it all together we can get the percentage of included attestations in the current time range with:

SELECT SUM(icount(get_indices(a.f_aggregation_bits, c.f_committee)::int[] & '{$perfValidators}'::int[]))::double precision / SUM(icount(c.f_committee::int[] & '{$perfValidators}'))
FROM t_beacon_committees c
JOIN (SELECT x.f_slot, x.f_committee_index, bit_or(RIGHT(x.f_aggregation_bits::text, -1)::varbit) AS f_aggregation_bits
FROM t_attestations x
GROUP BY x.f_slot, x.f_committee_index) a ON a.f_slot = c.f_slot AND a.f_committee_index = c.f_index
WHERE c.f_slot >= ($__unixEpochFrom() - $genesisTime) / 12 AND c.f_slot <= ($__unixEpochTo() - $genesisTime) / 12
AND c.f_committee && '{$perfValidators}';

Which is up there with the least comprehensible SQL I’ve ever written.  But it works and it says Teku is awesome.

But why stop there? What if we wanted to know the number of correct attestations?

Remember way back up when we were first aggregating attestations? If we just filtered the attestations to the ones which have the right beacon block root, our aggregate will only count validators that attested to the correct head. A straight join to t_blocks will do the trick:

SELECT SUM(icount(get_indices(a.f_aggregation_bits, c.f_committee)::int[] & '{$perfValidators}'::int[]))::double precision / SUM(icount(c.f_committee::int[] & '{$perfValidators}'))
FROM t_beacon_committees c
JOIN (SELECT x.f_slot, x.f_committee_index, bit_or(RIGHT(x.f_aggregation_bits::text, -1)::varbit) AS f_aggregation_bits
FROM t_attestations x
JOIN t_blocks b ON x.f_beacon_block_root = b.f_root AND b.f_slot = (SELECT MAX(b2.f_slot) FROM t_blocks b2 WHERE b2.f_slot <= x.f_slot)
WHERE x.f_slot >= ($__unixEpochFrom() - $genesisTime) / 12 AND x.f_slot <= ($__unixEpochTo() - $genesisTime) / 12
GROUP BY x.f_slot, x.f_committee_index) a ON a.f_slot = c.f_slot AND a.f_committee_index = c.f_index
WHERE c.f_slot >= ($__unixEpochFrom() - $genesisTime) / 12 AND c.f_slot <= ($__unixEpochTo() - $genesisTime) / 12
AND c.f_committee && '{$perfValidators}';

It’s tempting to just test that the block root and slot match, but then attestations that correctly pointed to empty slots wouldn’t be included. So we have to check that the block root is the last one at or before the attestation slot.

Or we could change that join to check the target root instead (it should point to the block root at the first slot of the epoch):

JOIN t_blocks b ON x.f_target_root = b.f_root AND b.f_slot = (SELECT MAX(b2.f_slot) FROM t_blocks b2 WHERE b2.f_slot <= x.f_target_epoch * 32)

And hey look, we made that incomprehensible SQL even worse!

This may not be the best use of chaind but it’s fun, and the snippets of SQL here can be remixed and combined to do quite a lot of things when investigating what happened on the chain. I suspect it would be a lot easier to have a denormalised table that recorded the first time a validator’s attestation was included, to save all the bit twiddling and array intersections. It may come at a fairly large cost of disk space though.