How to Synthesise Stakeholder Submissions Without Losing Source Traceability

Synthesise stakeholder submissions with source IDs, coding, framework matrices, and QA for traceable, defensible reporting.

Stakeholder submission sets create a specific kind of analysis problem.

The team is usually dealing with unlike response channels, high-volume text, subgroup differences, public scrutiny, and later drafting pressure all at once. The job is not only to identify themes. It is to compare responses in a way that still holds up when someone asks where a finding came from.

This guide shows how to run that process with the Framework Method: intake, source IDs, familiarisation, coding, matrixing, drafting inputs, and final QA.

Key takeaways

  • What this page helps with: running a Framework Method workflow that keeps response channels, codebook logic, and matrix outputs visible under review.
  • Who it is for: teams handling consultation responses, letters, survey comments, workshop records, or petitions that need to feed a defensible draft.
  • What changes when it is done well: the submission set turns into a traceable matrix and drafting input instead of a pile of responses that has to be re-opened late.

Before you start

This workflow is a strong fit when:

  • the team is handling a large submission or consultation set
  • multiple response channels are involved
  • findings need to feed a report, briefing, recommendations, or policy draft
  • reviewers will ask where a statement came from
  • the work needs to hold up after staff handovers or review rounds

Before you begin, make sure you have:

  • one folder or repository for the live source base
  • a master catalogue or intake register
  • stable source IDs and naming rules
  • a decision on the unit of analysis for coding
  • one owner for the working codebook and QA process

Why the Framework Method fits this workflow

The Framework Method is a well-established applied qualitative approach for working across multi-participant datasets through a matrix with rows as cases and columns as codes or themes (Gale et al.). That is exactly why it fits stakeholder submissions so well: the team can compare across cases without losing the line back to each source. That method specificity is the point of this article. It is not the general evidence-workflow guide; it is the submission-synthesis playbook.

Practical guidance also stresses the control layer around the method itself: a master catalogue, stable source IDs, naming rules, version control, a research diary, and a maintained codebook (ACI best-practice guide; Cochrane Handbook).

Consultation-analysis guidance adds an operational warning: keep unlike response channels visible, record cleaning decisions, and explain campaign-response, subgroup, or weighting caveats rather than flattening everything into one pool (Office for Students guide; Nottinghamshire consultation guide).

On this site, the commercial pattern is already visible in live work. The South African Local Government White Paper workflow used source locators and quote fields inside a high-volume public-submission system. The UNICEF Zambia evidence workflow used quote-per-claim guardrails and spreadsheet traceability. The UNICEF Palestine situation analysis linked quotes, coded issues, service-access records, and theory-of-change layers so drafting could happen from organised evidence instead of raw notes.

Steps overview

  1. Separate the response channels and clarify the output
  2. Build the master catalogue and source IDs
  3. Clean the submission set before coding
  4. Read through the material and write early memos
  5. Draft the initial coding framework and codebook
  6. Pilot the coding and align the team
  7. Code the full set with traceable references
  8. Chart the data into a matrix and compare across cases
  9. Draft findings and QA the evidence trail

Step 1

Separate the response channels and clarify the output

Define what you are producing and keep unlike response types visible before you start coding.

Before the team codes anything, decide what kind of submission exercise this is, what the output has to do, and which response channels must stay visible as distinct evidence streams.

Write down what the synthesis needs to produce. That may be a consultation report, board paper, thematic findings section, policy briefing, recommendation pack, or draft chapter input.

Then map the response channels feeding that output. Keep surveys, letters, petitions, workshops, meetings, and other channels visible as distinct types rather than collapsing everything into one blurred pool.

At this stage, answer:

  • What output has to be produced?
  • Which response channels feed it?
  • What level of traceability will reviewers expect?
  • Which subgroups matter for comparison?
  • What weighting or caveat notes will need to be stated explicitly?

This step protects the logic of the whole workflow. Once very different response types are mixed without metadata, it becomes much harder to explain what the evidence actually shows.

Step 2

Build the master catalogue and source IDs

Create the control layer that lets every coded point travel back to a source.

Set up a master catalogue before full coding begins. This is the intake register for the live source base.

A compact register is easier to scan when the team is pressure-testing intake logic. The example below shows the kind of fields that keep unlike submission types visible and traceable from the start.

If the workflow will later need quote-level referencing, plan that now as well. A good setup reserves fields for excerpt references, page numbers, transcript numbers, or paragraph locators so the charting stage does not have to retrofit them.

This is also where stable naming rules, version control, and a research diary start to matter. They are not extra admin. They are the audit trail.

If you want to see this logic in live public-consultation work, the Local Government White Paper case study shows how quote fields, source locators, and taxonomy controls support later drafting and review.

Sample intake register fields

| Field | Why it matters | Example value |
| --- | --- | --- |
| Source ID | Gives every record a stable reference for coding, charting, and QA | WS-LET-014 |
| Response channel | Keeps unlike evidence streams visible instead of blended too early | Letter |
| Respondent label | Shows who submitted the response or how it should be grouped | Provincial business chamber |
| Date received | Supports audit trail checks and reporting windows | 2026-02-11 |
| Organisation type or subgroup | Makes later comparison possible | Business association |
| Geography | Preserves comparison by place when it matters | Gauteng |
| File location | Returns the reviewer to the source quickly | /submissions/letters/WS-LET-014.pdf |
| Locator rule | Prepares the workflow for quote-level trace-back later | Page and paragraph |
| Status | Shows where the record sits in the workflow | Cleaned, ready for coding |

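If the register lives in a spreadsheet, the same fields can be mirrored in a small validation script so intake checks can be batch-run. This is a minimal sketch, assuming the naming rule follows the WS-LET-014 pattern from the example table; the `IntakeRecord` field names and the ID regex are illustrative choices, not a fixed standard.

```python
import re
from dataclasses import dataclass

# Illustrative ID rule: exercise prefix, channel code, three-digit sequence
SOURCE_ID_PATTERN = re.compile(r"^[A-Z]{2,3}-[A-Z]{3}-\d{3}$")

@dataclass
class IntakeRecord:
    source_id: str        # e.g. "WS-LET-014"
    channel: str          # e.g. "Letter", "Survey", "Petition"
    respondent_label: str
    date_received: str    # ISO dates keep sorting and reporting windows simple
    subgroup: str
    geography: str
    file_location: str
    locator_rule: str     # e.g. "Page and paragraph"
    status: str           # e.g. "Cleaned, ready for coding"

    def validate(self) -> list[str]:
        """Return a list of problems instead of raising, so intake QA can batch-report."""
        problems = []
        if not SOURCE_ID_PATTERN.match(self.source_id):
            problems.append(f"{self.source_id}: ID does not match naming rule")
        if not self.channel:
            problems.append(f"{self.source_id}: response channel missing")
        return problems

record = IntakeRecord("WS-LET-014", "Letter", "Provincial business chamber",
                      "2026-02-11", "Business association", "Gauteng",
                      "/submissions/letters/WS-LET-014.pdf",
                      "Page and paragraph", "Cleaned, ready for coding")
print(record.validate())  # → [] when the record passes intake checks
```

The point of returning a problem list rather than raising is that intake QA usually wants one report across the whole register, not a stop at the first bad record.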
Step 3

Clean the submission set before coding

Remove weak records and obvious noise before they distort the coding work.

Before detailed analysis, clean the response set.

Check for:

  • blank responses
  • duplicate responses
  • campaign or template responses where relevant
  • broken files or incomplete records
  • submissions logged twice under different names
  • source files missing the metadata needed for later comparison

This step is also the moment to record the cleaning choices. If you remove blanks, duplicates, or invalid entries, keep that decision in the working notes and summary counts.

A controlled submission set saves time later because the team is not spending its coding hours on records that should have been filtered out at intake.
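A minimal cleaning pass can be scripted so that every removal is logged rather than silent. The sketch below assumes records are dicts with `source_id` and `text` fields, and treats identical normalised text as a duplicate; that rule is one illustrative choice, and campaign or template responses usually need a looser similarity test decided by the team.

```python
def clean_submissions(records):
    """Filter blanks and exact duplicates, keeping an audit log of every removal."""
    seen_texts = {}
    kept, removals = [], []
    for rec in records:
        text = (rec.get("text") or "").strip()
        if not text:
            removals.append((rec["source_id"], "blank response"))
            continue
        key = " ".join(text.lower().split())  # normalise whitespace and case
        if key in seen_texts:
            removals.append((rec["source_id"], f"duplicate of {seen_texts[key]}"))
            continue
        seen_texts[key] = rec["source_id"]
        kept.append(rec)
    return kept, removals

records = [
    {"source_id": "SUR-001", "text": "Transport costs block access."},
    {"source_id": "SUR-002", "text": ""},
    {"source_id": "SUR-003", "text": "Transport costs block access."},
]
kept, removals = clean_submissions(records)
print(len(kept), removals)
```

The `removals` log is what feeds the working notes and summary counts: it records not just that a record was dropped, but why, and which record it duplicated.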

Step 4

Read through the material and write early memos

Use familiarisation and memoing to shape the coding framework before full coding begins.

Do not jump from intake straight into a giant coding pass.

Read a cross-section of the material first. Aim to cover different response channels, respondent types, and apparent positions. During that read-through, keep short memo notes on:

  • recurring issues
  • outlier concerns
  • differences by respondent group
  • language patterns worth preserving
  • methodological concerns that could affect interpretation
  • possible headings for the coding framework

This memoing stage matters because qualitative extraction is iterative rather than one-pass. Teams often need to move backwards and forwards between familiarisation, coding, and refinement before the framework settles.

The output of this step is not a finished theme set. It is a sharper working sense of what the submission set is really saying and what details need to be preserved later.

Step 5

Draft the initial coding framework and codebook

Turn the early read-through into a framework the team can apply consistently.

Now build the first working coding framework.

In submission work, the codebook is not just an analysis tool. It is part of the later reporting defence.

Start by listing the main topical categories that the output needs to answer. Then define the codebook fields for each code:

  • code name
  • short definition
  • what belongs in the code
  • what does not belong in the code
  • any subgroup notes or qualifiers
  • whether the code is descriptive, evaluative, or action-oriented

In consultation or submission work, a practical framework usually combines a priori structure with room for inductive refinement. That means the reporting questions and consultation themes may shape the starting columns, while the material itself still expands or sharpens the coding logic.

Keep version control tight here. Once the codebook starts changing, record what changed and why.
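A codebook can live in a document or spreadsheet, but keeping it in a structured file makes version control straightforward. The fields below mirror the list above; the code name and version log entry are hypothetical examples, not prescribed values.

```python
# Illustrative codebook entry; field names follow the codebook list in this step
codebook = {
    "service_access_barrier": {
        "definition": "Respondent describes a practical obstacle to reaching a service.",
        "includes": "Transport, cost, distance, opening hours, documentation demands.",
        "excludes": "Complaints about service quality once access is achieved.",
        "subgroup_notes": "Flag rural vs urban respondents for later comparison.",
        "code_type": "descriptive",  # descriptive, evaluative, or action-oriented
    },
}

# Record what changed and why, in a version log kept beside the codebook
version_log = [
    ("v0.2", "Split access barriers from quality complaints after the pilot round"),
]
```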

Step 6

Pilot the coding and align the team

Test the framework on a sample before the full coding pass.

Run the framework on a sample first.

Pick a small set of responses that covers different channels or respondent types. Code them, compare the results, and check for:

  • overlap between codes
  • codes that are too broad or too narrow
  • missing categories
  • inconsistent treatment of the same issue
  • uncertainty around what counts as enough evidence for a code

This is where inter-coder and intra-coder consistency checks help. The goal is not perfect statistical neatness. The goal is to expose drift early enough that the codebook can still be corrected.

At the end of this step, the team should have one agreed codebook version, a clearer sense of the matrix columns, and fewer surprises waiting inside the full submission set.
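For the consistency check, even a simple percent-agreement figure per pilot pass exposes drift. The sketch below compares two coders' labels on the same pilot records; it is a rough screen under the assumption of one label per record, not a substitute for discussing the disagreements themselves.

```python
def percent_agreement(coder_a, coder_b):
    """Share of pilot records the two coders coded identically."""
    assert len(coder_a) == len(coder_b), "coders must label the same records"
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

# Hypothetical pilot labels from two coders on four records
coder_a = ["access", "access", "quality", "cost"]
coder_b = ["access", "cost", "quality", "cost"]
print(percent_agreement(coder_a, coder_b))  # → 0.75
```

Low agreement on a specific code is usually a codebook problem (overlapping or vague definitions), not a coder problem, which is why this check belongs before the full coding pass.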

Step 7

Code the full set with traceable references

Apply the framework to the whole submission set while keeping the evidence trail intact.

Once the framework is stable enough, code the full set.

For each coded point, preserve the evidence reference that lets someone move back to the source later. Depending on the material, that may include transcript number, page, line, paragraph, response ID, or another source locator.

Also preserve contextual fields that matter later, such as response channel, geography, organisation type, stakeholder group, or any methodological note that helps explain the evidence.

This is where traceability becomes real. The coded record should not just say what theme appeared. It should say where it appeared and under what conditions.

The UNICEF Zambia workflow is a strong example of how quote-per-claim guardrails and structured traceability make later reporting faster and safer.
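In practice, each coded point can be stored as one row that carries both the code and its route back to source. A sketch of what such a row might hold; the field names are illustrative and would follow whatever the intake register and codebook already use.

```python
# One coded point: the theme plus everything needed to walk back to the source
coded_point = {
    "source_id": "WS-LET-014",            # ties back to the intake register
    "code": "service_access_barrier",     # from the agreed codebook version
    "locator": "p. 3, paras. 2-3",        # follows the locator rule for letters
    "excerpt": "Rural applicants miss appointments because transport is expensive.",
    "channel": "Letter",                  # contextual fields preserved for comparison
    "subgroup": "Business association",
    "geography": "Gauteng",
    "coder": "analyst-1",
    "codebook_version": "v0.2",
}
```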

Step 8

Chart the data into a matrix and compare across cases

Turn coded material into a matrix that supports cross-case and subgroup comparison.

This is the core Framework Method move.

The matrix is where the submission set stops being a pile of responses and starts becoming a usable comparison structure. If the matrix is weak, the later draft will either flatten the differences or over-rely on vivid examples.

Build the matrix with rows as cases and columns as codes or themes. Then chart the coded material into the relevant cells using concise summaries, source references, and quote excerpts where needed.

Once charted, the matrix lets the team compare:

  • one respondent group against another
  • one geography against another
  • one response channel against another
  • recurring concerns across the full dataset
  • outlier positions worth keeping visible

Good charting is not copy-paste dumping. It is concise, source-linked summarising that preserves enough context to stop the evidence from flattening.

This same matrix logic also sits inside the UNICEF Palestine workflow, where coded issues, quotes, service-access patterns, and theory-of-change layers had to work together for fast drafting.
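The charting move itself can be sketched as a pivot from coded points into a case-by-code structure, where each cell keeps the summary and its locator together. A minimal sketch in plain Python, assuming coded points carry `source_id`, `code`, `summary`, and `locator` fields as in the earlier steps.

```python
from collections import defaultdict

def chart_matrix(coded_points):
    """Pivot coded points into rows-as-cases, columns-as-codes cells.

    Each cell is a list of source-linked summaries, so the matrix stays traceable."""
    matrix = defaultdict(lambda: defaultdict(list))
    for p in coded_points:
        matrix[p["source_id"]][p["code"]].append(f'{p["summary"]} [{p["locator"]}]')
    return matrix

# Hypothetical coded points from two cases sharing one code
points = [
    {"source_id": "WS-LET-014", "code": "access",
     "summary": "Transport costs block attendance", "locator": "p. 3"},
    {"source_id": "SUR-088", "code": "access",
     "summary": "Taxi costs make visits unrealistic", "locator": "Q14"},
]
m = chart_matrix(points)
print(m["WS-LET-014"]["access"])  # → ['Transport costs block attendance [p. 3]']
```

Reading down a column then compares one code across all cases; reading along a row shows everything one case said.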

Mini framework matrix row

| Case | Channel | Code | Charted summary | Locator |
| --- | --- | --- | --- | --- |
| WS-LET-014 | Letter | Service access barrier | Business chamber says rural applicants miss appointments because transport is expensive and routes are unreliable; requests mobile outreach support. | p. 3, paras. 2-3 |
| SUR-088 | Survey | Service access barrier | Respondent reports repeated missed visits because taxi costs and travel time make attendance unrealistic. | Q14 comment 088 |

Step 9

Draft findings and QA the evidence trail

Write from the matrix, then check the route back to source before anything is signed off.

Drafting should happen from the matrix and coded evidence base, not from raw files.

At this stage, each finding or recommendation should be able to answer four questions:

  • What does the pattern appear to be?
  • Which coded evidence supports it?
  • Which subgroup, channel, or context conditions matter?
  • Where can a reviewer go to verify the point?

Run one final QA pass before the draft goes out. Check:

  • whether the finding still matches the coded evidence
  • whether any weighting or caveat note needs to be stated explicitly
  • whether subgroup differences are described accurately
  • whether the source locators still work
  • whether quotations or paraphrases are still attached to the right records

This is where the workflow protects the report. If a reviewer challenges a claim, the team should be able to move straight back to the coded record and source reference rather than reopen the whole submission set.
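Part of this final pass can be automated: checking that every finding's cited source IDs still resolve to the master catalogue. A sketch under the assumption that findings list their supporting source IDs in an `evidence_ids` field; a fuller version would also open each file location and locator.

```python
def qa_trace_check(findings, register_ids):
    """Flag any finding citing a source ID that is missing from the master catalogue."""
    issues = []
    for finding in findings:
        for sid in finding["evidence_ids"]:
            if sid not in register_ids:
                issues.append(f'{finding["title"]}: unknown source {sid}')
    return issues

# Hypothetical register and one finding with a broken citation
register_ids = {"WS-LET-014", "SUR-088"}
findings = [
    {"title": "Transport costs limit access",
     "evidence_ids": ["WS-LET-014", "SUR-099"]},
]
print(qa_trace_check(findings, register_ids))
```

An empty issues list does not prove the finding is right, only that its evidence trail resolves; the substantive checks in the list above still need human review.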

If your team is still fixing these problems later in the writing cycle, How to Build Evidence Workflows for Reporting and Accountability is the broader systems view behind the same issue.

FAQ

What is the Framework Method in qualitative analysis?

It is a structured approach that organises qualitative material into a matrix with rows as cases and columns as codes or themes, making cross-case comparison easier without losing the line back to source material (Gale et al.).

When is the Framework Method a good fit for stakeholder submissions?

It is a strong fit when the team needs to compare many responses across themes, subgroups, or channels and still preserve source traceability for drafting and review, especially when the work needs a matrix-backed audit trail (Gale et al.; ACI best-practice guide).

Should survey responses, letters, petitions, and meetings be analysed together?

They can be read against each other, but they should be recorded and reported as distinct response channels first. Mixing them too early weakens interpretation and later reporting (Office for Students guide; Nottinghamshire consultation guide).

What should the audit trail include?

At minimum, keep a master catalogue, stable source IDs, file naming rules, version notes, a controlled codebook, memo notes, and source locators for coded excerpts or quotes (ACI best-practice guide; Cochrane Handbook).

When should AI be added to this workflow?

After the catalogue, codebook, source IDs, and matrix logic are stable. AI works much better when it sits on top of an organised evidence base rather than a messy submission set, because trustworthy qualitative analysis depends on clear data-management steps and preserved context before tools are layered on top (Bingham; Cochrane Handbook).

A better stakeholder-submissions workflow starts before the report

A strong stakeholder-submissions workflow starts long before the drafting stage. By the time the report begins, the team should already have a catalogue, a stable codebook, a matrix, and a checked route back to source.

That is what makes the final output faster to write, easier to review, and safer to defend.

Sources used in this guide

Methodology and guidance

  • Gale et al., Using the framework method for the analysis of qualitative data in multi-disciplinary health research. Framework Method background and matrix structure.
  • ACI, A best practice guide to qualitative analysis of research to inform healthcare improvement, re-design, implementation and translation. Audit trail controls including catalogues, codebooks, naming, and diaries.
  • Office for Students, Approach to the results of the NSS: Analysis of consultation responses. Cleaning, coding framework, coding checks, subgroup analysis, and weighting caveats.
  • Nottinghamshire consultation guide, Guide 9 - Analysis and writing up the consultation. Advice on keeping response channels distinct during consultation analysis.
  • Cochrane Handbook, Chapter 21: Qualitative evidence. Iterative extraction and preserving contextual and methodological detail.
  • Bingham, From Data Management to Actionable Findings: A Five-Phase Process of Qualitative Data Analysis. Five-phase backbone and memoing logic.


Relevant service fit

This article connects to service work that turns large submission sets into traceable matrices, findings, and drafting inputs.

Data Synthesis

Combine and interpret inputs from multiple sources into integrated findings.

Related case studies

These examples show how structured evidence handling holds up across coding, synthesis, drafting, and review.

Related reading

Read the traceability guide alongside this method, then the report-writing guide that follows it.

Need help with a similar problem?

If you already know the workflow is breaking, the next step is to map the current chain, identify the weak points, and decide what needs structure, what needs method discipline, and what needs system support.