Chris Wu

What Pareto Doesn't Say

In 1896, Italian economist Vilfredo Pareto made the simple observation that 20% of the Italian population owned 80% of the land. Originally meant as an observation about the concentration of wealth in an economy, it has evolved into one of the most quoted, and most misunderstood, rules of thumb.

The 80-20 rule generally states that 80% of the effects in a system can be attributed to 20% of the agents in said system. Illustrative examples of this are:

  • 80% of bugs come from 20% of the features
  • 80% of a company's revenue comes from 20% of the customers
  • 80% of the world's wealth is held by 20% of the population

If your system follows this pattern, the rule is very powerful when you're trying to optimize effort across a large system. Focus on those 20% of features with fixes or feature cuts, offer bonuses and premium service to your best customers - and so on.

Let us not forget that rules of thumb are approximations. They are at best social science, at worst guessing. So what are the limits of this model of thinking? I see three issues: data that doesn't follow the model, inability to actually act on the model, and incorrect interpretation of the model.

First, not every system follows the Pareto principle. Pareto works when items follow a power law distribution. If you are an established publisher then your #1 bestseller will likely dwarf even your #2 bestseller. If you are an independent bookseller, your sales across titles will tend to be flat. This is critical if you are trying to optimize sales effort on 20% of titles and expect 80% return. A simple way to address this is to validate ahead of time that you are, in fact, working with an 80-20 situation.
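That validation step can be sketched in a few lines. The sales figures below are made-up illustrations, but the check itself is simple: sort your items by value and see what share of the total the top 20% actually accounts for:

```python
def top_share(values, top_fraction=0.2):
    """Return the fraction of the total contributed by the top `top_fraction` of items."""
    ordered = sorted(values, reverse=True)
    k = max(1, round(len(ordered) * top_fraction))
    return sum(ordered[:k]) / sum(ordered)

# A skewed, power-law-ish catalogue: one runaway bestseller.
bestseller_sales = [1000, 250, 120, 60, 30, 15, 8, 4, 2, 1]
# A flat catalogue: every title sells about the same.
flat_sales = [100, 98, 97, 95, 94, 93, 92, 91, 90, 90]

print(top_share(bestseller_sales))  # ≈ 0.84: Pareto-style optimization applies
print(top_share(flat_sales))        # ≈ 0.21: no 80-20 to exploit
```

In the skewed catalogue the top two titles carry about 84% of sales; in the flat one they carry about 21%, which is exactly their share of the catalogue - no leverage there.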

More subtle is a slavish adherence to the actual 80-20 numbers. The actual 80 and 20 are no more special than 60-30 - yes, the numbers don't have to add up to 100! In a 60-30 situation you can assert that 60% of your revenue comes from 30% of your customers. This is still a very useful conclusion, as it still allows for focused optimization on that 30% with a disproportionately high expected payoff.

Most important is not to assume that the Pareto effect applies when your distribution doesn't follow a power law (say, when things are more or less level). Visually: if a plot of the thing you care about is heavily skewed toward a few items, you're good. If it's flat, you probably shouldn't assume you can pull off some 80-20 optimization.

Secondly, you can't always apply more gas to a certain set of customers (or bugs, etc.). If you have a set of customers accounting for much of your revenue, you might have already tapped them out. The principle only asserts something about how the data is distributed, not what you can do to effect change.

Overly focusing on the top 20% means that you might miss out on increasing revenue in your bottom 80%. If you can get 50% more revenue from that bottom 80%, that's still 10% revenue growth, which can be pretty good. Who knows, you might actually be sitting on one of those opportunities.
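To make that arithmetic concrete (using the illustrative 80-20 split, not real data): the bottom 80% of customers hold the remaining 20% of revenue, so growing their spend by 50% lifts total revenue by 10%:

```python
# Illustrative round numbers for an 80-20 revenue split.
total_revenue = 100.0
bottom_80_share = 0.20   # the "long tail" holds 20% of revenue
uplift = 0.50            # grow the tail's spend by 50%

growth = total_revenue * bottom_80_share * uplift
print(growth / total_revenue)  # 0.1, i.e. 10% overall revenue growth
```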

The third mistake I see is the complete misapplication of the principle to how a task might be done. Imagine a software project that is expected to take one year. Someone might suggest that in the past they've observed "people really only need 20% of the features to get 80% of the value". While incredibly hard to measure (not all features take the same time to build, but I digress), it is a conceivably true statement. They might continue with the suggestion "let's prioritize those 20% of features and then we could stop development after 3 months".

Here the Pareto principle seems to imply that we can unlock significant customer value at only a small fraction of the work! Amazing! Here's the problem: that's not what it says. Pareto only works when you look at the system as a whole, with full information. You can, after the work is complete, list every feature, compare how customers use them and conclude that the completed project satisfies the Pareto principle. What you cannot do is assume that what you identify on the fly is that critical 20%.

Because all projects - software or otherwise - are performed under imperfect information, no person can know a priori what tasks will be part of that effective 20%. Lean (or any iterative method of building) only advises you to build end-to-end experiences to validate value. It does not assert that it is possible to prioritize the highest value elements first - only the riskiest ones.

Making coherent product experiences means picking some non-critical features to support the critical ones. Going in with a Pareto mindset will ensure that those end up on the cutting-room floor. Proper agile, lean or basic-logic development means delivering a stunted but effective release to test out the viability of the product or technology - the so-called cupcake release. That effectiveness is often marred or even nullified if the release lacks basic functionality (e.g. the ability to sign up with a password, enter some personal information, etc.).

The Pareto principle is an interesting observation but is prone to overuse, just like any other mental model. We should always remember that it only states something about how things are distributed; it can't help you change a system while it is still playing out. It's best limited to post-hoc analysis, not used as some panacea for doing the minimum possible work.

Being Fired

This post is inspired by Zach Holman's brave post. I've long enjoyed Holman's blog, talks and other contributions, even though I've been torn on GitHub as a company ever since their meritocracy chatter.

While our experiences were very different, his post felt very familiar. I recognized all of his observations: the constant "let's go for drinks", the embarrassment and the stigma. The feeling that being rejected by this one company somehow taints you for all companies is illogical, and yet somehow it's the default state of our society.

I don't really have any earth-shattering insights from that time. This is nothing more than a set of desultory observations that someone in a similar position might perhaps find comforting.

You're not in control anymore

I've always done pretty well. I was never the smartest guy in the classroom but, as the stereotypical son of Asian immigrants, I always did pretty well. Good grades, a good university, grad school, a good job, good performance reviews and so on. Until suddenly not. I am reminded of a computational biology grad student who took an algorithms class I taught - he got a B. This is our conversation, summarized:

Him: But I'm an A student

Me: Not anymore.

And that was my first lesson: you're something until you're suddenly not. Up until that time, I had always been in control of my fate: choice of schools, jobs, technologies. I never really appreciated that control, so it hit me like a ton of bricks when it was gone.

In the tradition of Stoicism, it was good to learn that having the choice taken away from me wasn't a unique occurrence. On a daily basis, dozens of my actions are the result of constraints far outside my control. This just happened to be a big, noticeable one.

It's not fair

No, it's not. It's pointless for me to rehash the various complaints I had about the logic of losing my job. The only salient point that holds up to scrutiny is that it's not fair and that's just too bad.

I do believe that success and skill are correlated, but only just so - the smallest correlation with the faintest whiff of significance in the statistical sense. We believe those who are successful or more senior are skillful. I'm no Nassim Taleb, but it's clear to me now that that's not true. So if you're expecting not to be fired just because you perceive yourself as more skilled, that's not how it works.

Perhaps it's a cynical lesson to be learned but I realized that regardless of how well things seem to be going, you can't ever really be certain that things are going to work out. I'm not suggesting that one should be paranoid but rather to always be aware that unfair things happen to everyone. Like the proverbial boy scout, be prepared.

People will surprise you

For a while, people will want to buy you a lot of drinks. Once they become acclimated to your new state of unemployment, they'll stop caring.

They'll grow weary of your complaining and wonder "how long is he going to wallow?". Friends who couldn't imagine working without you will find a way. Friends who claim they will quit just won't. Those who say they will speak up for you after you're gone won't. The sympathy just dries up.

People are genuine in their affection for you, but it's part of the human condition: we can't imagine life differently than it is now. So just as you'd never imagined being unemployed, they never really thought about how easy it'd be to get along without you.

Conversely, people will also surprise you with their generosity. I was shocked at the number of near strangers that sent me introductions to everyone under the sun they knew who might be "looking for someone with my talents". You can always tell someone that has experienced being let go because they know.

It won't seem like it when you're in the depths of the rejection period, but most people are actually quite understanding. In the end, I became friends with a considerable number of the acquaintances who came to my aid.

Yes, people will judge you

Here's where the stigma is very real. When people find out that you were fired, they always pause. I seem to recall that after the post was tweeted, someone said something I'd paraphrase as:

Imagine if everyone who's ever been dumped was subject to such scrutiny: You were dumped? There must be something wrong with you.

This was something I experienced first-hand as my employment options started picking up. My references would time and again say "they really seem hung up on the firing thing". I can see their point of view: if you have a great employee, you do whatever you can to keep them. Ergo, this person must be trouble.

I can't dispute that I can be a difficult person at times. However, this logic has a very real impact on companies doing hiring. The reasoning isn't valid, because every company is different: what works in one place won't work in another. This is also true for employees - one man's troublemaker is another's culture fit. So a phantom mismatch could be robbing organizations of perfectly good candidates.

Do something with that time

Making sure you have side projects and productive activities is key, for two reasons. First, it's great for your mental health. You go from being busy for a full day to looking for a job - and looking for a job isn't a full-time task. After scrubbing LinkedIn, you've basically got all the recruiters covered (since that's all they do). Then you're randomly Googling local companies of interest. Perhaps attend a meetup or two after that? That's not 8 hours a day.

This leaves you with hours to do something worthwhile. Catch up on reading, learn a new skill, cook, paint, whatever. Because here's the second reason: otherwise you'll look back at that time as a great missed opportunity.

The truth is that (almost) everyone eventually lands on their feet - especially in tech. Which means that this time will have been great for working on all the things you've always wanted to do. The key is to view this time as a blessing in disguise which, admittedly, is easier said than done.


While it was a challenging time, I count myself very lucky to have been part of an industry with high employment, to have had a supportive spouse and not to have been taxed financially. I have great sympathy for the countless others who aren't so lucky.

I guess advice can come in two flavors depending on how much of a positive thinker you are (I stand behind both). If you're a happy person, then know that "this too shall pass". If you're more of a pessimist, then know that "dumber people than you find better jobs. You'll be fine."

A Corollary of the Peter Principle

The Peter Principle is a well-known management theory positing that employees are eventually promoted to a level where they are no longer successful. Colourfully called their "level of incompetence", this is the first point past the employee's capability.

While the book, as I understand it, is somewhat tongue-in-cheek (more here), it certainly feels truthy. It's certainly anecdotally true when we see a manager who struggles or a CTO who clearly doesn't know how to lead a department. Someone performs quite well at a mid to mid-senior position, gets promoted, then doesn't excel at their new senior position. The asserted cause is the change in responsibility, demanding mastery of a skill that is now beyond the capabilities of the employee.

This idea has taken strong root in the software industry, since most senior roles no longer require writing code - the main activity of most of the entry-level to intermediate workforce. The industry as a whole is struggling with what to do with its senior engineers.

So conversely I wonder:

Are we eliminating great potential managers because they can't code?

The current natural progression in most career paths is the same. Do something (Junior-Intermediate Widgeter). Do it at a senior level (Senior Widgeter). Optionally, do it at an even more senior level (Senior Widget Architect). Become some kind of manager of that thing (Director of Widget Production). Then manage people who manage people who do that thing (VP Widgets).

However, once one moves past the production phase of the career path, the management path requires a different set of skills. This is evidenced by the myriad management books that start by advising you to remember "you don't do that job anymore". These new skills often involve time management, stakeholder/customer management and managing a team. These are distinct from the skills that earned the promotion in the first place.

So the status quo has us choosing from a pool of people with great production skills who are possibly not the best candidates on managerial skills. While I admit that being a great developer helps one be a great project manager, I wonder how integral it is.

Can we conceive of a truly flat workplace where someone might manage a team without being considered higher than the team members? If so, we might be able to evolve to a place where people can be groomed for managerial roles rather than thrust into them with a jarring change in responsibility. This would widen the pool of candidates, which is ultimately the starting point for better employees.

Retrospection

We've all been part of projects that didn't go well. Despite our best efforts, projects often go wrong: poor decisions, bad relationship management or just bad timing. To this day, I don't understand how companies still don't have rigorous standards for retrospecting on both successful and unsuccessful projects. I say standards here because the process itself is irrelevant so long as the root cause is reached. I don't think I can express how hard this can be for an organization better than Hess.

When pressed on the issue, most people don't deny that retrospection is important, but they present excuses or tactics that delay or outright stop the process. And as we know, delay decreases a human's ability to recall facts.

So what are these issues?

It's in the past

This argument touches on the basic discomfort people have with reflecting on painful events. It posits, usually implicitly, that looking back on painful events doesn't serve any purpose since what has been done can't be undone.

The issue with this argument is that it's patently false. Examining issues in projects and processes is a great opportunity to see which assumptions and decisions were incorrect. This gives you a chance to prevent those errors a second time.

Further, retrospection doesn't always mean that the conclusion will be negative. It is conceivable that given the assumptions known at the time, all parties made the correct decisions - it does happen!

Everyone did the best they could

This argument is dangerous because it frames the discussion as being about "the competence of those involved". As such, it's good to clarify at the start that we're not critiquing the intent or competence of the actors, but their actions within the context.

Like helicopter parenting, an overly protective environment that robs people of learning opportunities under the guise of ego preservation is very dangerous. Managers who allow this are essentially saying that their employees, and possibly they themselves, are unwilling to look bad and would rather remain ignorant instead.

The Blame Game

Blame carries such a negative connotation that firing and finger-pointing are all people think about when they hear the word. We should change the discussion to one of responsibility, and remind people that, ultimately, someone has to be responsible for every action and decision.

I'm also in the camp that believes the culture of "we're all responsible for X" is terrible and fraught with peril. When everyone owns a thing, no one owns it. Witness any group email you've ever received where there are tasks to be done and no clear mandates.

Blame isn't about firing people or making them bear the full brunt of a failure. Rather, it's about letting the responsible party know why they were responsible and how they dropped the ball. In the future, if things are done right, they will know: "ah, this is my responsibility".

Dog and Pony Retros

I've seen this a couple of times too. I don't blame Agile methodologies for this, but Agile processes have so many retrospective tools that people just use them without understanding them.

Here we have meetings where, ostensibly, we're trying to get to the root cause but for a number of reasons it doesn't happen:

  • Haven't gathered the right data.
  • Don't have the right people in the room: If even one key person is missing then you won't know what information or context is missing.
  • The power gap is too high: If junior project members are intimidated, this squelches conversation.

The Dog and Pony then concludes (figuratively) with "ok, we've been in this room for hours and look at all these post-its. Closure achieved."


So we take the high road, go through the discomfort of honest and brutal feedback, and reach the root cause. Now what? What are the actual gains we're looking for when we retrospect and challenge ourselves?

Everyone Learns

If we've effectively found the cause of our problem, now we can start looking for a solution. Avoiding bad stuff is often one of the most productive actions. We have a tendency to try and pick the best choice from a group of alternatives but often every single non-bad choice contains good elements (see Ruth Chang's TED talk for more).

On the other hand, bad choices usually have a very clear bad quality to them. Identifying this bad quality is arguably more helpful than validating a good, but possibly subpar, decision.

No Bad Myths

This is an observation I've made over my career, one that isn't apparent until a fair amount of time has passed after a bad project: the project myth.

The birth of the myth goes something like this: the team is composed of various people from various departments - sales, engineering, management, what have you. The project goes south and it's often everyone's fault a little bit: a missed deadline because of bad code, an unrealistic promise made during a deal, or perhaps just poor requirements gathering.

Without a deep painful look into the why of the failure, the team just ignores it. However, each member still has their biases and blames some other party for the failure. Without the ability to have a frank and open conversation with all parties, everyone creates their own version of the truth.

"Sales set us up with a bad relationship."

"Engineering's estimates were off."

These cautionary myths breed distrust and destroy company culture. They are doubly dangerous because they are so far removed from the event - often surfacing only months later in discussion - that people have forgotten their true origin.


I can't say that I've been part of any company that gets it right every time, but I have experienced some really good retrospective activities. Every company should decide what works for them, but I think good reviews share common elements:

  • prioritize the root cause and its solution above all else
  • get all parties to agree on facts before discussion
  • are clear about responsibility and blame
  • address power imbalances among the team members so that quieter members aren't steamrolled

At times in the past, I too took the default mode of thinking "why bother after the fact?". But if the true ability to succeed comes from grit, then the only way to build grit is to understand our failures and improve on them.

Product Manager, Concept

Product management is definitely one of the more fashionable jobs right now (see also: data scientist, manager of customer success). The role has been around for quite some time - long enough that entire educational philosophies have been organized around it. The new wave of software-only PMs is bringing a more technical slant and a renewed excitement. As a PM myself, I'm certainly not opposed to seeing more product managers bring additional focus and leadership to teams building good product experiences.

However, as a by-product of this popularity, I'm noticing that more and more companies are trying to slot legitimate existing jobs into some kind of product management role.

Witness the birth of (and these are all real and can be found without leaving Downtown Toronto):

  • Product Manager, Monetization
  • Product Manager, Growth
  • Product Manager, Design
  • Product Manager, User Experience
  • Product Manager, BI
  • Product Manager, Data

Contrast this with the more vanilla "Product Manager" or even something like "Product Manager, Mobile" and it just feels wrong.

These domains (monetization, growth, etc.) are really just product management activities that have traditionally been shared with others (e.g. a business analyst, a marketer, a designer and so on). Without really knowing what motivations Management (capital "M") might have for creating such roles, I don't see how they contribute to the good of the company strategy.

PMs need to manage... products. The point of being a PM is to understand the customer and shepherd the creation of something (hopefully great, but at least good) that solves that problem. Yes, this will involve things like business, design and data but they're only part of the picture. These PM-y titles above aren't solving a problem for the user, they're solving one for the organization.

The additional roles cause friction for the product managers with actual product responsibilities. A PM now collaborates with a designer but also has an additional stakeholder of equal rank, with either conflicting interests (displaying poor corporate consistency) or no conflicting interests (so... what are they for?).

If such roles exist out of a desire to put increased focus on those goals, then communicate that to the product team. If increased growth is a corporate priority, then everyone should be working towards that goal. And if you need to elevate one of these goals to the extent that you want to name someone "Product Manager, Growth", then perhaps the Product team just isn't doing a very good job.