
Building a Culture Where Feedback Actually Matters


“So many people tell me they’re nervous going into their review meeting,” says Gretchen Duhaime, who facilitates peer reviews at Last Call. “But after? They’re relieved, energized. When you have the right people in the room giving you specific, actionable feedback, it doesn’t feel like judgment. It feels like support.”

That transformation happens because of how we’ve structured our peer review process. At Last Call, we have no people managers, which means there’s no single person responsible for evaluating your performance. Instead, we’ve built a process where feedback comes from the people who work alongside you.

We’re a fully-remote team of about 50 people, building digital solutions for impact-driven organizations. We organize around expertise and function rather than management hierarchies. Some of us work in competency groups focused on design, engineering, or product & delivery, while others work in marketing, finance, people operations, and other areas. Your collaborators can change based on what you’re working on.

So the people positioned to give you the most meaningful feedback are the ones seeing your work day-to-day. But how do you gather that feedback and turn it into something actionable - something that actually helps you improve?

The Scattered Feedback Problem

In a healthy, communicative team, people are always sharing feedback. We’re an agile shop, so feedback is baked into how we work. In a sprint retro, someone captures an observation about how you facilitated a difficult conversation. You get notes on your code in a pull request review. A designer leaves comments on your wireframes. An account strategist sends a thoughtful note about how you handled scope creep.

That feedback is valuable, but it’s scattered. When thinking about your professional growth, you’d have to reconstruct it from memory or hunt through half a dozen different apps. More importantly, it’s incomplete. The designer commenting on a wireframe didn’t watch you facilitate a workshop with a difficult stakeholder. The engineer reviewing your code doesn’t know how you helped another developer work through a complex problem. The account strategist who sees your planning skills might never see your technical decision-making.

We needed a way to gather feedback from the right people, the ones who actually see different facets of your work, and transform it into a clear development plan that you could implement as part of an ongoing focus on getting better.

Our philosophy is simple: we strive for excellence by committing to continuous improvement at a sustainable pace. That means making incremental progress, ensuring each day is better than the last while planning for even greater gains tomorrow. It’s about defining what excellence looks like, getting honest feedback on where you stand, and making a plan to close the gap.

Defining What Excellence Looks Like

Improvement starts with a clear picture of where you want to go. That’s why we invested significant time defining competency matrices for our three core competency groups: Design, Engineering, and Product & Delivery. Each matrix outlines three levels of competency and describes the specific behaviors someone exhibits at each level.

Take our Engineering matrix. A Level I engineer can “pick up smaller, well-defined tasks and finish them solo.” A Level II engineer can “pick up medium, loosely defined tasks and finish them solo” and “ensures tasks are prioritized correctly.” A Level III engineer “picks up large, complex tasks and refines them into more easily solvable subtasks” and “translates business requirements into technical specifications that meet cost and time limitations.”

The Product & Delivery matrix follows the same pattern. For team facilitation, Level I is about “facilitating team events using standard formats.” Level II means you can “adapt processes for team effectiveness across contexts” and “create inclusive environments.” Level III is where you “develop innovative approaches to team leadership” and “coach others on effective facilitation.”

“When I was working toward Level II, the matrix gave me a clear roadmap,” says Cody Glassman, a software engineer at Last Call. “I could see all the different areas where I needed to grow - like writing code with testability in mind, ensuring tasks are prioritized correctly, and understanding how my work fits into the bigger picture. Having those specific behaviors spelled out made it much easier to figure out where to focus.”

For roles that don’t fit into one of our three competency groups, we use the GWC framework from EOS (Entrepreneurial Operating System): Get it, Want it, Capacity to do it. Do you understand your role and its accountabilities? Do you genuinely want to do this work? Do you have the capacity - the mental, emotional, and intellectual capabilities, along with the time and knowledge - to do the job well?

Beyond competencies and accountabilities, everyone at Last Call is also assessed against our seven core values. Because being excellent at your craft matters, but so does how you show up for your teammates and our clients.

These frameworks create shared language: a way for everyone to understand what we’re aiming for and where we each stand on that journey. They also become prompts for better feedback. Instead of vague observations like “you’re doing good technical work,” your peers can point to specific behaviors: “I’ve seen you pick up open-ended tasks and finish them, like that huge front-end ticket last month.” The frameworks give people concrete things to look for and clear ways to articulate what they observe.

But those observations only matter if they’re coming from people who actually see your work. That’s why who sits on your review team is so important.

You Choose Your Review Team

Here’s where our process diverges from traditional performance reviews: the person being reviewed (we call them the Focus Person) has a voice in selecting their review team.

That requires trust that the Focus Person will use their agency to get actionable feedback rather than seeking the easiest path. Our process guides you toward selecting people who work with you most closely, see your work in action, and can provide informed, relevant input. The key is that feedback comes from people who have genuine insight into your contributions and who are part of your network of accountability (those to whom you regularly make commitments and who regularly make commitments to you).

For perspectives that matter but wouldn’t fit in the review meeting itself, we have a supplemental feedback form. Maybe you want input from a client stakeholder who sees your communication skills, or a vendor partner who collaborated with you on a complex integration, or a former project team member who’s moved to a different role. The Focus Person can request supplemental feedback from anyone whose observations would be valuable, even if they’re not appropriate for the core review team. That feedback becomes additional input for their development.

“I lead two groups at Last Call, plus I covered for a product manager during her parental leave this year,” says Shawn Mishler. “When I started thinking about my review team, I realized how fragmented my work had become. Who actually saw me lead? Who saw me deliver on that coverage project? I couldn’t just pick four people I work with a lot - I had to be strategic about who could speak to what. The supplemental feedback form helped too - I sent it to twelve people who had pieces of the picture but weren’t core to the conversation. It forced me to map out my network of accountability in a way I’d never really done before.”

Mapping your network of accountability is the first step. The review meeting itself is where those carefully chosen perspectives come together.

What Actually Happens in the Review Meeting

We’ve adapted much of this process from Sociocracy for All, whose performance review approach puts the Focus Person in charge of their own development. A trained, neutral facilitator creates safety for honest conversation. The Focus Person plays an active role: sharing context about their accountabilities and opening and closing each feedback round. The rounds focus on strengths first, then areas for growth. This structure transforms what could feel like judgment into genuine support, because you’re actively gathering input from people invested in your success.

The feedback itself needs to be specific and actionable. We use principles from Nonviolent Communication to encourage collaboration and reduce defensiveness. Instead of “Your documentation is always unclear,” you hear: “When I read the spec you wrote for the orchestration layer, I had to ask you three follow-up questions to understand why you chose that approach. I need more context upfront to work efficiently. Would you be willing to include a brief overview section that explains the why behind the decision?” This framing helps take you out of explaining mode and into problem-solving mode.

Then the group collaborates to help the Focus Person think about what’s next. Someone recommends a book that helped them with a similar challenge. Another person suggests a course or certification. The team identifies potential blockers. Maybe you need more exposure to a certain type of project, or dedicated time for learning that needs to be factored into sprint planning. By the end, the Focus Person leaves fueled with ideas and resources.

Turning Feedback Into Action

These ideas become commitments to continued improvement when you create your Professional Development Plan. A good PDP balances quick wins with longer-term growth, gets specific about actions and resources, and sets realistic timelines. Most importantly, it acknowledges reality: you have other “important” work to do, and professional development will get pushed aside unless it’s treated as real work. If you need dedicated time for a certification program or deep learning, that gets planned for upfront and built into your schedule alongside other commitments.

Drafting a PDP can feel daunting, so we provide a library of frameworks to help structure your goals. Your review team can talk through what makes sense given where you are and where you’re headed. People Ops staff can help you choose which framework best fits a particular goal, and provide broader guidance on professional development. Once drafted, the plan goes back to your review team for input and consent, ensuring it’s both achievable and aligned with what came up in the review meeting.

Connecting Growth to Compensation

The process doesn’t stop with creating a plan. After the PDP is finalized, you meet with your group leader for a wrap-up session to debrief the experience. This is also where level changes get discussed for members of our competency groups - whether you’ve demonstrated the skills and behaviors to move from Level I to Level II, for example.

Then the review goes to the Compensation Committee to set your pay increase. Growth matters and gets recognized.

Just as importantly, your PDP becomes your guide. You use it to reflect on your own goals and adjust your course. There’s no one evaluating whether you checked off every item. The PDP helps you stay intentional about your development rather than letting it drift.

Why This Works

Our peer review process works because it’s embedded in how we operate. The feedback comes from people who see your work. The assessment criteria are transparent and shared. The structured facilitation enables honest conversation. And the entire process is oriented toward one question: how do we help each person get better at their craft and increase their contribution to the team?

“I’ve worked places where I got good feedback, so I had high expectations coming in,” says Madison Slattery, Sales & Marketing Coordinator. “In my review, my team recognized the work I was doing - like figuring out how to add braille to make our holiday card accessible. But they also told me I need to trust my instincts more and take more risks. And they didn’t just say it, they gave me concrete ideas for how to actually work on it. You need both recognition and honest feedback about where and how to push yourself.”

We practice what we preach about continuous improvement, including on this process itself. Competency groups iterate on their matrices as they learn what distinguishes levels. We added a summary of the previous year’s PDP to review meetings so development plans stay connected year to year. People Ops built a Jira project to track reviews and ensure nothing falls through the cracks. Our Government Solutions group added three-month reviews for new hires to give faster feedback when it matters most. We’re even evolving the name from Peer Review to Individual Retrospective - it better reflects what actually happens: a retro focused on an individual, with their collaborators, oriented toward learning and improvement. That’s what happens when you build a performance process around how people actually work.