Are Performance Reviews Mostly Theater?
And why they describe more than they measure
Performance reviews tend to follow a familiar pattern.
Once a year, you sit down with your manager. Goals are discussed and written down. They often sound reasonable, but also slightly abstract. Things like client value, leadership, development. At that point, it is not always entirely clear what is expected of you in practice.
Then you work, more or less like before. Projects move forward, problems get solved, clients are kept reasonably satisfied, and some things quietly improve without much visibility. If the goals happen to include something concrete, you may try to align with it. But in most cases, the work is still driven by the situation rather than the wording of the targets.
Toward the end of the cycle, you are asked to summarize what you did. You write a structured description of how your work aligns with the goals. If you have done this a few times, you know how to make it sound coherent.
After that, the process moves somewhere else. A group of managers compares people, discusses outcomes, and decides who gets promoted, who receives a salary increase, and who continues more or less as before. The criteria are visible, but the actual decision-making is usually a black box for anyone not in the room.
At this point, it is reasonable to ask what the process is actually optimizing for.
My own experience with this has been somewhat mixed. A significant part of my career has happened without formal performance targets beyond basic things like client work and utilization. At other times, I have had clearly defined but fairly abstract goals, similar to those used in most large consulting organizations. Looking at both sides, the difference in how the work itself felt was not particularly large. The difference was mainly in how that work was later described and interpreted.
This raises a few practical questions. Do performance reviews actually measure the right things, or mainly what is easiest to describe? Do they meaningfully influence how people work, or do most experienced professionals continue to focus on what the situation requires?
And how much do they really matter for promotions and salary decisions? In many cases, the outcome seems to reflect a broader view that has already formed over time.
More broadly, who is the process actually for? Does it genuinely help anyone, or is it mainly a structured way to make the outcome look consistent?
Why Performance Reviews Need Structure
Organizations do not run performance reviews because they enjoy them.
They need a way to make decisions about people that can be explained and defended. Promotions, salary increases, and role changes create expectations and comparisons. Without some structure, decisions would feel arbitrary, even when they are reasonable.
So frameworks are introduced. Competencies are defined. Evaluation criteria are linked to strategy. The intention is straightforward: create a common language that connects individual contribution to organizational goals.
In smaller organizations, this often remains relatively informal. Decisions are based on direct knowledge of people’s work. In larger organizations, distance increases. Not everyone sees everyone else’s contribution in detail, and decisions need to be aligned across teams.
There is a clear logic behind this. When work becomes complex and distributed, structure is introduced to make it more manageable. The result is a system that is less about precise measurement and more about creating a shared interpretation that holds together across the organization.
The question is how well that structure actually captures what is happening in practice.
Why It Can Feel Like Theater
The intention behind performance reviews is reasonable, but the experience can feel different.
On paper, the process looks structured and consistent. In practice, it often feels slightly detached from how work actually happens. Not completely wrong, but not quite aligned either.
The gap comes from a few recurring challenges.
The first is the level of abstraction. The metrics exist, but they are often too high-level to capture how value is created in practice. Different types of contribution can end up competing inside the same category, even though they are not directly comparable.
The second is behavioral. In theory, performance frameworks should guide how people work. In practice, most experienced consultants focus on what the situation requires. Client needs, project constraints, and real-world problems tend to override abstract targets. The evaluation system follows the work, not the other way around.
The third is the link to decisions. Performance reviews are formally tied to promotions and salary changes, but the actual decisions often reflect a broader view: accumulated contribution, reputation, role evolution, and perceived potential. The review summarizes that view, but rarely creates it.
In situations where many people are performing well, the distinction becomes even less precise. When several candidates could reasonably be considered strong performers, the outcome is shaped through discussion rather than measurement. The ability to explain and advocate for a person’s contribution starts to matter. Visibility, credibility, and influence inside the organization can tilt the outcome, even when the formal criteria are the same.
This is where the process can start to feel like theater.
The structure is formal, the criteria are defined, and the outcome is presented as if it follows directly from the inputs. But in reality, much of the decision has already taken shape before the formal evaluation. The review becomes a way to describe and justify that outcome in a consistent format.
Nothing in the process is fake. The work is real, and the decisions are real. But the connection between the two is not always as direct as it appears.
How to Work With the System
Trying to optimize the performance review itself usually has limited effect. By the time the discussion happens, most of the interpretation has already formed.
What tends to matter more is how easy your contribution is to interpret during the year.
This does not require gaming the system. In practice, it often means making the work visible in concrete terms. Not just what was done, but what changed because of it. Connecting your work to outcomes that others recognize, and making sure the right people understand why it mattered.
Some types of work need more translation than others. If the value is indirect, it helps to make the reasoning visible while the work is happening, not only at the end of the year. Short summaries, clear explanations, and shared material tend to travel better than the original work itself.
It also helps to accept the limits of the system. Not everything valuable will be captured perfectly, and not every evaluation will feel precise.
Over time, many experienced professionals adjust their perspective. The performance review becomes less of a measurement and more of a summary of how their contribution has been understood inside the organization.
A More Useful Way to Look at Performance Reviews
Seen this way, performance reviews are not purely theater, but they are also not precise instruments.
They are a structured way to make sense of something that is inherently difficult to measure. They create a narrative that connects individual work to organizational logic, even if the connection is sometimes imperfect.
So performance reviews are not meaningless, but they describe reality rather than measure it.
And in most cases, the underlying pattern remains simple. Contribution that is easier to interpret is also easier to reward.
This does not mean the system needs to be taken too seriously. But it usually makes sense to understand how it works, and to make sure your contribution can be understood within it.
📘 Explore the Topic Further
If this perspective resonates, I explore the same theme in more detail in my Senior Expert Playbook series.
The Senior Expert Career Playbook looks at how expert careers actually develop in practice—covering positioning, visibility, and how contribution becomes understood inside organizations.
The Senior Expert Pay Playbook builds on that and explains how compensation typically forms through perceived impact, trust, and structural alignment, not just performance reviews or negotiation.
Together, the books focus on a simple idea: your work matters, but how it is interpreted often matters just as much.
👨‍💻 About the Author
Eetu Niemi is an enterprise architect, consultant, and author.
Follow him elsewhere: Homepage | LinkedIn | Substack (enterprise architecture) | Medium (writing) | Homepage (FI)
Books: Enterprise Architecture | The Senior Expert Career Playbook | Technology Consultant Fast Track | Successful Technology Consulting | Kokonaisarkkitehtuuri (FI) | Pohjoisen tie (FI) | Little Cthulhu’s Breakfast Time
Web resources: Enterprise Architecture Info Package (FI)