Consumer Tokenomics

In this post I’d like to cover some of the things that we're thinking about at Bunches as we're designing a fungible token for onchain consumer utility.

To be clear, we are not designing a token as a financial asset for onchain consumers, so a lot of the thoughts here shouldn't be applied if that's your use case.

Our use cases involve sports fandom and reputation, both of which are what I'd refer to as "soft assets" (as opposed to hard financial assets). That said, they're still quantifiable assets that can accrue value from multiple parties who can be aligned with a single onchain utility token.

Economic ≠ financial.

Utility tokens in this way don't necessarily represent financial assets to be transacted, but they are economic assets that quantify non-monetary value within an ecosystem.

For the record, although our use case is sports fandom & reputation, I hope this post can be helpful for a range of soft assets across verticals: reputation as a contributor, cultural alignment, shared ownership in a non-treasury good, governance power, social capital accrued via follower graph strength, etc.

The considerations here really only apply to fungible tokens (ERC-20), but there may be lessons that can be abstracted to non-fungible tokens (ERC-721) or hybrids (ERC-404, ERC-1155) as well.

Disclaimer: this post is not intended to be a comprehensive look at designing utility tokens in general but rather is a look into how we’re designing a utility token at Bunches intended to quantify sports fandom. I truly hope it’s helpful to others thinking through similar use cases.

Supply Size

Let’s open with a broad blanket statement that is almost categorically false, shall we?

Increasingly, I’m convinced that consumer utility tokens should be infinite in supply.

For technical purposes, unlimited or infinite means 2²⁵⁶ - 1, the largest value that totalSupply (an unsigned 256-bit integer) can hold in a contract that meets the ERC-20 standard.
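As a quick sanity check, here's that ceiling computed in Python (just the integer, not the contract itself):

```python
# Max value of an unsigned 256-bit integer, the ceiling for an
# ERC-20 totalSupply: 2**256 - 1.
UINT256_MAX = 2**256 - 1

print(UINT256_MAX)
# A 78-digit number, roughly 1.16 * 10**77.
```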

Unlike a financial system, where a fixed money supply (M0, M1, M2, etc.) is controlled via monetary policy by the issuer (the US Federal Reserve, BRICS, the Bitcoin protocol, etc.), consumer systems are composed of infinite, non-zero-sum value exchanges.

Put another way: there is no theoretical limit to social capital (n² relationships, where participants can be infinite). Or reputation scores. Or power as implemented in governance.

In cases where soft assets seem limited, it's only a practical limit (a social network can only be as large as the number of humans on Earth, for instance) or an unbounded quantity compressed onto an arbitrary scale (Uber driver and passenger scores: surely not all 5-star drivers are truly equivalent!).

I’m of the belief that the quantifying mechanism (tokens in this case) should mirror the value which is being quantified: financial or otherwise.

This is the point where farmers and flippers staunchly disagree. They behave as if, and believe that, the only way to build value is through systematically imposed scarcity; they often assert that a fixed, limited supply leads to an increase in financial value and therefore a better system.

And that’s ok. I just disagree.

But even if you are in agreement that a token quantifying soft assets should be unlimited, it doesn’t mean that there aren’t drawbacks.

We absolutely should be thoughtful around the economics, even for soft assets. We need to take into account the fact that there may in fact be financial value at some point in the future.

Just like in the real world, hard assets may be traded for soft assets.

Avoiding the devaluation of soft assets is just as important as it is for financial assets.

Put another way, inflation is generally bad.

We can't just set the supply to an arbitrarily large number and call it a day.

Deflationary Utility

The biggest drawback of an unlimited utility token is not that it’s more difficult to financialize, but rather that such a large supply risks devaluing the thing you’re quantifying.

If there’s an unlimited amount of reputation, what does it truly mean to be in the top 10%? It could just mean that you’ve been around longer than the next person. Or that you’ve curated more cultural assets. Or that you’ve amassed more governing power simply by not exercising it.

So how do we handle inflation? By building deflationary properties into the token.

There are two main tokenomic mechanisms that we can use to control an unlimited supply token, one of which you may be familiar with and another that may be new.


The first mechanism that can be used to throttle supply is perhaps the most obvious: transactions themselves. This is also the most familiar, as it’s what Ethereum uses to control its own utility of compute power.

By building deflationary (“burn”) mechanisms into the usage of the token, you have some control over the supply as it’s used.

Whether it’s the actual spending of the token to perform an action or it’s a blanket expense to perform any transaction, building deflationary mechanisms into transactions are a clear way to control a token with infinite supply.

I won’t belabor this category, as you’re most likely already familiar, but here are a couple of examples:

  • Voting to kick a user “spends” your reputation

  • Proposing a change to a DAO uses your governance token

  • Burn fees for every transaction (similar to Ethereum)
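The burn-on-transfer idea above can be sketched as a minimal in-memory ledger. Everything here is illustrative (the class name, the 1% rate, the accounts); it's not our actual implementation:

```python
class BurnableToken:
    """Toy ledger: a flat percentage of every transfer is destroyed."""

    def __init__(self, burn_rate=0.01):  # hypothetical 1% burn per transfer
        self.balances = {}
        self.total_supply = 0
        self.burn_rate = burn_rate

    def mint(self, account, amount):
        self.balances[account] = self.balances.get(account, 0) + amount
        self.total_supply += amount

    def transfer(self, sender, receiver, amount):
        assert self.balances.get(sender, 0) >= amount, "insufficient balance"
        burned = int(amount * self.burn_rate)
        self.balances[sender] -= amount
        self.balances[receiver] = self.balances.get(receiver, 0) + amount - burned
        self.total_supply -= burned  # supply shrinks with every use
        return burned

token = BurnableToken()
token.mint("alice", 1_000)
token.transfer("alice", "bob", 500)
print(token.total_supply)  # 995: five tokens burned on a 500-token transfer
```

The same structure works when the burn is a spend tied to a specific action (a vote, a proposal) rather than a blanket fee.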


I was recently listening to Stripe Press’ fantastic infrastructure podcast, Beneath the Surface, and the episode with Ryan Peterson (of Flexport fame) sent me down a shipping container rabbit hole.

Did you know that if a container sits for too long in port or on chassis without being moved, the owner of that container is charged a fee?

This fee is called a demurrage fee, and it applies when the logistics network starts to choke due to the inactivity of a single actor. As it turns out, demurrage fees are also a concept for traditional, real-world currencies and commodities that require storage: precious metals, paper cash, etc.

That is, when resources are being accumulated and are underutilized, costs are incurred by those not using the resource.

This demurrage fee concept can also apply to social & utility tokens.

Like international goods, soft assets like culture, power, and reputation are meant to be in motion.

Soft assets are meant to be in motion.

In a thriving economy, value is circulated, not hoarded.

If someone “hoards” reputation or governance or culture, and it sits underutilized, those actions eventually choke the network, which is meant to be constantly active.

So build demurrage into the system. If someone sits for too long, they get charged fees. Interest, if you will. And those tokens slowly drip back into circulation.

One caveat here: you have to be aware of the incentives that demurrage introduces into the system. Will people contribute in low-quality ways just to avoid demurrage fees? Dynamics like that have to be mitigated.
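A minimal sketch of demurrage on an idle balance, assuming a hypothetical grace period and daily rate:

```python
def apply_demurrage(balance, idle_days, grace_days=30, daily_rate=0.001):
    """Charge a holding fee only after a grace period of inactivity.

    The collected fee can be routed back into a rewards pool instead of
    being burned, so value re-circulates rather than disappearing.
    """
    chargeable_days = max(0, idle_days - grace_days)
    remaining = balance * (1 - daily_rate) ** chargeable_days
    return remaining, balance - remaining

remaining, fee = apply_demurrage(10_000, idle_days=90)
# 60 chargeable days at 0.1%/day compound to roughly a 5.8% fee
```

The grace period matters: it distinguishes normal lulls in activity from true hoarding, which softens the "contribute anything to dodge the fee" incentive mentioned above.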

Initial Token Allocation

At Bunches, we've opted to build a product and network first. And an economy second.

People > assets.

Therefore, another category of questions we're working through internally is that of kickstarting our economy: what's the entry point for users?

There are two ways to think about this: start users at zero, letting them earn their way up as actors in the system, or start users on an equal playing field, letting them gain or lose status as actors in the system.

We're leaning towards the latter, where users start with an initial reputation score/balance, and can then earn or lose based on in-app behavior. Why?

Frankly, because this is the behavior we want to see in the world regarding social capital.

I believe that if you go around not trusting anyone until it's earned, your human experience is going to be lacking.

Assume reputation until proven otherwise...but when proven otherwise, there should be a steep cost.
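To make that asymmetry concrete, here's a sketch of the model we're leaning towards. The starting grant, event names, and deltas are purely illustrative:

```python
STARTING_REPUTATION = 100  # hypothetical initial grant for every new user

REPUTATION_DELTAS = {
    "quality_post": +2,
    "helpful_reply": +1,
    "confirmed_violation": -25,  # steep, asymmetric cost when trust is broken
}

def adjust_reputation(score, event):
    """Assume reputation until proven otherwise; violations cost steeply."""
    return max(0, score + REPUTATION_DELTAS.get(event, 0))

score = STARTING_REPUTATION
score = adjust_reputation(score, "quality_post")         # 102
score = adjust_reputation(score, "confirmed_violation")  # 77
```

The point of the shape: gains are slow and earned, while a proven breach of trust wipes out many contributions at once.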

Defending Soft Assets

Sybil attack. Two words that strike fear into the heart of tokenomists all over the world.

With soft assets, the fear should arguably be even greater: the stakes are higher. When attackers aren't seizing a network for financial gain, they're after power, infamy, or cultural control.

And such attacks are often easier. There is no need for a 51% attack if you control a sufficiently large minority on a cultural platform.

You could say that this is what’s happening on many social networks, with Meta and Twitter being overrun with pseudonymous spam accounts and fake news.

Simultaneously, eclipse and Finney attacks are much more impactful when the very thing being quantified and exchanged isn’t money but information. A “double spend” of information is trivially executed compared to a double spend of currency.

How then should we defend consumer networks?

You can’t.


Just kidding.

The short answer: much like you defend financial networks from the same attacks.

  • Ensure the cost to attack is high enough to deter malicious actors.

  • Run tests and model usage with malicious actors; adjust costs accordingly.

  • Perform identity validation within reason (phone verification over email or username login as a start, for example).

  • Use onchain social graphs as a trust proxy.

  • Utilize proven Sybil-resistant methods at the point of use (SumUp for content voting, EigenTrust for peer-to-peer reputation, etc.).

Each one of these in isolation isn’t enough. But together they provide layers of protection for your network and for consumers.
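One way to picture the layering is a gate that requires several independent checks to pass before a user gets full economic weight. The signal names and threshold here are hypothetical:

```python
def passes_sybil_gate(signals, required_layers=2):
    """Count independent defense layers; no single check is trusted alone."""
    layers = [
        signals.get("phone_verified", False),          # identity validation
        signals.get("graph_trust_score", 0.0) > 0.5,   # onchain social graph proxy
        signals.get("account_age_days", 0) >= 7,       # raises cost of mass account creation
    ]
    return sum(layers) >= required_layers

print(passes_sybil_gate({"phone_verified": True, "account_age_days": 30}))  # True
print(passes_sybil_gate({"graph_trust_score": 0.9}))                        # False
```

Because an attacker has to defeat multiple unrelated checks per fake account, the marginal cost of each Sybil identity rises with every layer.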

What's Next?

Well, we're obviously implementing a lot of these ideas and thoughts at Bunches. Maybe I'll report back with learnings or lessons. Or you'll just see it onchain. ;)

Want to learn more, or just chat about this? Ping me! I primarily hang out on Farcaster these days, so feel free to send me a reply (thinking in public is great!) or shoot me a direct cast there.

Otherwise, thanks for reading! 🙏

Feel free to collect, tip, or otherwise share this content with others you think would appreciate it.

Decentralizing Moderation

I was recently speaking to the leadership team at a Fortune 500 about our work at Bunches (obviously they loved it 😏), and the largest category of questions was about a singular topic: moderation.

From Twitter's – I refuse to call it X – struggles with brand friendliness to Farcaster's growing pains to the philosophical debates of free speech vs. cancel culture, content policing is the topic du jour of many in the user-generated content (UGC) space.

Navigating the thorny fields of moderation is without question the number one issue facing consumer platforms today. The question isn't really "if" (yes, you should moderate in some way) or "when" (start as early as possible), but "how".

Rather than describing a methodology in the abstract, I'll talk about how we've tackled moderation at Bunches this far, and how we plan on evolving in the future.

On Bunches

Bunches is the social network for sports fans. Alongside single player experiences like scoreboard, there are group chats for discussing leagues, sports, teams, and even players.

With nearly 250,000 users and growing rapidly, our UGC is one of our most valuable assets...and one of our biggest risks, which makes moderation one of our biggest challenges.

Go visit a random Instagram sports meme account's comment section and you'll get a small taste of the content issues we're facing. Or imagine your favorite pub, where people are a little heated and tipsy, but make everyone pseudonymous. It's not civil. It's not pretty.

Welcome to sports in the digital realm.

Discussion about a rival user's family. Racial and homophobic insults and slurs. Constant innuendo not suitable for some younger audiences. Off-topic rants. Spam comments and scam invitations.

The list goes on.

Our Current Approaches

Currently, we do what most people would do in our situation...and in some cases, we take further steps than some would otherwise take:

  1. We have an allowlist/whitelist and a denylist/blacklist of words, insults, and links. We systematically check every message for these lists and moderate appropriately.

  2. We have implemented an automated AI image recognizer to detect pornographic or otherwise inappropriate imagery.

  3. We have user-centric tooling for blocking users or messages and managing what you see as an individual user. Users can also report individual messages, which are then reviewed manually by our team (via integration with a dedicated Slack channel).

  4. Systematically, we can moderate messages (muting them for being "off-topic"), and Bunches team members can soft- or hard-delete messages altogether.

  5. Bunch owners can kick/ban users from individual group chats (Bunches), and Bunches team members can remove users from the platform at large...including banning by device ID (which is far more reliable than IP, which a simple VPN gets around).
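The denylist check from step 1 can be as simple as the sketch below. The entries are placeholders, and a production filter would also handle substrings, leetspeak variants, and normalization:

```python
DENYLIST = {"badword", "scamlink.example"}  # placeholder entries

def check_message(text):
    """First-pass systematic filter: flag any message containing a denied term."""
    hits = [word for word in text.lower().split() if word in DENYLIST]
    return ("flagged", hits) if hits else ("allowed", [])

print(check_message("totally normal sports take"))       # ('allowed', [])
print(check_message("click scamlink.example for gear"))  # ('flagged', ['scamlink.example'])
```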

Other Viable Approaches

This is absurd, to be honest.

Larger companies like ByteDance and Meta handle moderation with scaled-up versions of the above: throwing more tech and more people at the problem. These teams and systems handle moderation platform-wide across users and content.

Other companies like Reddit and Discord distribute the problem of moderation through administration and isolation: each server or subreddit is isolated from one another, with each fiefdom having its own moderation team that reviews and decides on content.

Either way, platforms to date have centralized the power of moderation to either platform-wide teams & systems (such as ByteDance and Meta) or to moderators in isolated communities (such as Reddit and Discord).

The problem with this is that centralized moderation can fail in one of two ways: abandonment or abuse.


Moderators under-use their power. They fall asleep on the job (sometimes literally), content sneaks through during "off-hours", rules are applied inconsistently, or automated moderation systems go down for a period of time. In any case, the responsibility of moderation is vacated by the centralized authorities in charge of it. This abandonment of responsibility leads to a noisy platform, distrust in moderation systems, and a user base that sees the platform as incompetent.


Moderators over-use their power. Content is removed even when it technically follows the platform's laid-out rules, moderators exercise personal vendettas against users via their power, or entire communities have their voices silenced via malicious individuals, misaligned algorithms, or buggy code. This abuse of responsibility leads to a dying platform, distrust in moderation systems, and a contentious relationship between platform and user base.

Alignment: A Better Way Forward

Whereas centralized approaches to moderation fracture the relationship between users and platforms, decentralized moderation can align users and platforms.

Something that we're pursuing here at Bunches is what I believe will be a better way: decentralizing moderation to users themselves. I explain what this could actually look like below, but first the why.

I've said this many times, but web3 is not fundamentally a financial technology; web3 is fundamentally an economic technology. Tokenization is a phenomenal tool for aligning incentives between two or more parties who don't trust one another. After all, this was the primary problem that crypto originally solved (at least probabilistically).

In a world where both abandonment and abuse lead to a distrust in moderation systems and a fracture in the relationship between platform and user, aligning incentives around content moderation seems like a crucial problem to solve for consumer platforms.

By creating shared ownership, platforms can also share the responsibility of moderation.

What This Could Look Like


The first step is for the platform to determine and, ideally, quantify high-quality contributors. This could be done in a variety of ways, and can include both first-party and third-party data, but in its most basic form you identify users who are creating and consuming content in a meaningful way. Questions to ask around this identification:

  • Who sends the most messages?

  • Who posts the most original content?

  • Who reacts, likes, replies, or comments the most as a lurker or consumer of content?

  • Whose social graph is high quality and growing?

Establishing a rules engine for reputation is the zeroth step; implementing that rules engine via an onchain mechanism is the next.

This can be done via tokens of any kind (or even other onchain mechanisms like attestations), and again can include both on-platform and external data (this is up to the platform to define), but identifying and quantifying contributors is the goal here. An example of this in action is something akin to Yup's Yup Score, but perhaps more specific to the content platform.
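A sketch of a simple reputation rules engine using first-party signals only. The signal names and weights are invented for illustration; a real engine would be tuned and eventually implemented onchain:

```python
def reputation_score(user):
    """Combine engagement signals into a single contributor score."""
    weights = {
        "messages_sent": 0.5,        # raw participation
        "original_posts": 2.0,       # creation weighs more than consumption
        "reactions_given": 0.2,      # lurker/consumer engagement still counts
        "quality_connections": 1.0,  # social-graph strength
    }
    return sum(weights[k] * user.get(k, 0) for k in weights)

print(reputation_score({"messages_sent": 40, "original_posts": 5, "reactions_given": 100}))
# 40*0.5 + 5*2.0 + 100*0.2 = 50.0
```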


Once reputation is established for your user base, build moderation tools that require consensus from X% of relevant users.

While the rules for consensus may differ from platform to platform, no single user should have the authority to moderate a message or user. Perhaps you'd base the threshold for consensus on the total reputation that has access to the content. Perhaps you'd base it on the reach of the content, or on users who have a social connection to the original author, and so on.

There are probably many permutations of the mechanism that would work, and experimentation would be necessary to get it right here on a per-platform basis.


Once consensus is reached for a moderation action by relevant users, the platform itself has to enforce the collective action via code (and smart contract when appropriate). There should be no additional human input required. If the threshold is met, the action is taken.

This immediacy accomplishes two things: it demonstrates that the consensus mechanism has immediate effect, and it shows that users (not the platform's moderation team or algorithm) control what content is seen or distributed.


On Bunches, I've been toying with some of these concepts, and we have enough data internally to proceed with what I believe could be a very interesting model for consumer web3 companies like Bunches, Farcaster, Lens, etc.

In practice, this could be as simple as an upvote/downvote mechanism, weighted by the reputation score of each voter. If a threshold is hit (either in absolute terms or as a ratio), a moderation action against the content or against the user is taken.
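That reputation-weighted vote can be sketched as follows. The threshold and the convention (a downvote is a vote to moderate) are illustrative:

```python
def moderation_outcome(votes, reputations, threshold=0.6):
    """votes maps user -> +1 (keep) or -1 (moderate); each vote is
    weighted by the voter's reputation score."""
    remove_weight = sum(reputations[u] for u, v in votes.items() if v < 0)
    total_weight = sum(reputations[u] for u in votes)
    if total_weight and remove_weight / total_weight >= threshold:
        return "moderate"  # threshold met: action executes automatically
    return "keep"

reps = {"alice": 120, "bob": 80, "carol": 40}
print(moderation_outcome({"alice": -1, "bob": -1, "carol": +1}, reps))  # moderate
```

Note the weighting: two high-reputation voters outweigh a dissenting low-reputation voter, which is exactly what makes Sybil swarms of fresh zero-reputation accounts ineffective here.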

The key is making these actions clear, easy, and intentional. Users should not accidentally ban a user, delete a message, or time someone out. Nor should users have to read pages of documentation to figure out how to do so.

What's Next?

Well, we're building this at Bunches. It's a real problem, and I believe we have a real solution for it.
