UNICEF child poverty study evidence workflow for female-headed households in Zambia

For a UNICEF child poverty study in Zambia, I built a schema-first workflow that converted 120 narrative case studies into a clean, query-ready evidence base. The system combined spreadsheets, AI-assisted coding, review guardrails, and a plain-English insight layer so non-technical writers could produce reporting-ready tables and summaries with traceable evidence.

Sector

Research

Client type

Nonprofit

Primary output

A governed, spreadsheet-first evidence workflow with AI-assisted coding, traceable reporting outputs, and a self-serve insight interface.

Commercial result

Cut analysis time from 60-90 minutes per case to about 15 minutes while improving consistency, traceability, and reporting speed.

Project overview

What the project was solving

A qualitative research team needed to turn 120 narrative case studies on female-headed households in rural Zambia into a consistent evidence base for reporting. The existing process was slow, hard to standardise across themes, and difficult to defend in review when evidence links were not clear.

Context

The project focused on female-headed households in Mongu and Kasama, where the research team had rich narrative material but needed a faster way to extract, structure, and reuse evidence across a multi-theme study. The engagement ran for three months and had to support credible reporting inside a review-heavy environment.

Challenge

The work had to speed up analysis without weakening evidence quality. Each case needed to map to the same schema, outputs had to stay traceable back to the source material, and the workflow had to be practical for a spreadsheet-based team rather than built around specialist tools only analysts could run.

Scope and delivery

What needed to happen and what was built to support it


What needed to happen

  • A shared schema and data dictionary that could standardise extraction across ten themes
  • A coding workflow that captured only supported claims, used nulls for missing information, and flagged ambiguity for review
  • A spreadsheet-based evidence base that non-technical team members could query, review, and reuse
  • Reporting outputs that matched the study format and could be traced back to case ID, quote, and cell range
  • A handover process with SOPs and training so the team could run the workflow independently
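A schema plus data dictionary like the one described above can be made machine-checkable so every extracted row is validated before it enters the evidence base. The sketch below is illustrative, assuming hypothetical field names (`case_id`, `supporting_quote`, `ambiguity_flag`) rather than the study's actual schema:

```python
# Hypothetical sketch of a schema/data-dictionary check: required fields,
# allowed values, and explicit nulls for missing information.
# Field names are illustrative, not the study's real schema.

SCHEMA = {
    "case_id": {"type": str, "required": True},
    "location": {"type": str, "required": True, "allowed": ["Mongu", "Kasama"]},
    "theme": {"type": str, "required": True},
    "claim": {"type": str, "required": False},            # null when the narrative is silent
    "supporting_quote": {"type": str, "required": False},
    "ambiguity_flag": {"type": bool, "required": True},   # True routes the row to human review
}

def validate_row(row: dict) -> list[str]:
    """Return a list of schema violations; an empty list means the row is clean."""
    errors = []
    for field, rule in SCHEMA.items():
        value = row.get(field)
        if value is None:
            if rule["required"]:
                errors.append(f"missing required field: {field}")
            continue  # nulls are allowed for optional fields
        if not isinstance(value, rule["type"]):
            errors.append(f"wrong type for {field}: expected {rule['type'].__name__}")
        if "allowed" in rule and value not in rule["allowed"]:
            errors.append(f"value for {field} not in allowed list: {value}")
    return errors
```

A check like this is what lets a spreadsheet-based team trust the evidence base: bad rows are caught at entry rather than discovered during review.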

What I built or delivered

  • A ten-theme schema with defined fields, naming rules, formats, and guidance for missing values
  • An AI-assisted case coding workflow with quote-per-claim guardrails and human review points
  • A clean evidence database built in Excel and Google Sheets
  • Spreadsheet logic for rollups, location comparisons, ranked lists, and filtered analysis
  • A reporting-focused Custom GPT for plain-English querying, summary generation, table outputs, and formula visibility
  • QA checks, reconciliation checks, traceability rules, versioning logic, and access-control guidance
  • A simple SOP and two compact training sessions for handover
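
The "quote-per-claim" guardrail mentioned above can be expressed as a small acceptance rule: a coded claim only enters the evidence base if it carries a verbatim supporting quote, otherwise it is nulled or routed to review. This is a minimal sketch assuming a hypothetical record shape, not the project's actual output format:

```python
# Sketch of the quote-per-claim guardrail: accept a claim only when it carries
# a verbatim quote that actually appears in the source narrative.
# The record fields ("claim", "supporting_quote") are illustrative.

def apply_guardrails(record: dict, source_text: str) -> dict:
    claim = record.get("claim")
    quote = record.get("supporting_quote")
    if claim is None:
        # Missing information stays null rather than being guessed.
        return {**record, "status": "null"}
    if not quote or quote not in source_text:
        # A claim without a verifiable quote goes to a human reviewer.
        return {**record, "status": "review"}
    return {**record, "status": "accepted"}
```
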

Execution

How the work moved from raw inputs to a usable output

Process

  1. Defined the schema, field rules, and data dictionary for the full thematic scope
  2. Processed cases one at a time through an AI-assisted extraction workflow with explicit guardrails
  3. Flattened structured outputs into Excel and Google Sheets to create a query-ready evidence base
  4. Added spreadsheet formulas and rollups to support pattern detection and reporting tables
  5. Built a self-serve insight interface on top of the evidence base for report writers
  6. Added governance checks for outliers, structure validation, reconciliation, and source traceability
  7. Trained the team to read the data, verify outputs, and use the system in live reporting
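
Steps 03 and 04 can be sketched in miniature: flatten per-case structured output into one row per claim, then roll it up by location (the spreadsheet equivalent of a COUNTIF). The case records and field names below are illustrative, not the study's real data:

```python
import csv
from collections import Counter

# Illustrative per-case structured output from the extraction step.
cases = [
    {"case_id": "CASE-001", "location": "Mongu",
     "claims": [{"theme": "income"}, {"theme": "education"}]},
    {"case_id": "CASE-002", "location": "Kasama",
     "claims": [{"theme": "education"}]},
]

# Flatten: one row per claim, ready to load into Excel or Google Sheets.
rows = [
    {"case_id": c["case_id"], "location": c["location"], "theme": cl["theme"]}
    for c in cases for cl in c["claims"]
]

# Rollup: claims per location, mirroring a COUNTIF over the location column.
rollup = Counter(r["location"] for r in rows)

# Write the flat evidence base out as CSV for the spreadsheet layer.
with open("evidence_base.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["case_id", "location", "theme"])
    writer.writeheader()
    writer.writerows(rows)
```

Keeping the evidence base flat (one claim per row) is what makes the later filtering, ranked lists, and location comparisons simple formula work rather than bespoke analysis.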

Outputs

  • Standardised evidence base in Excel and Google Sheets
  • Data dictionary and schema documentation
  • AI-assisted qualitative coding workflow
  • Reporting-ready tables, summaries, and draft-ready sections
  • Plain-English Custom GPT insight interface
  • SOP and training materials for team handover

Outcome and impact

What changed and what the work made possible

Commercial result

  • Reduced processing time per case from 60-90 minutes to about 15 minutes
  • Saved an estimated 120 analyst hours across the full dataset
  • Improved consistency across themes and locations with less interpretation drift
  • Enabled non-technical writers to generate and verify reporting-ready summaries on demand
  • Improved auditability by linking claims back to source excerpts and exact spreadsheet locations
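
The auditability point above rests on a simple provenance chain: every reported claim links back to a case ID, a verbatim excerpt, and the exact spreadsheet cell it was read from. A minimal sketch, with hypothetical field names and references:

```python
# Sketch of the audit trail behind each reported claim. The case ID, sheet
# name, cell reference, and quote below are illustrative examples only.

def audit_reference(claim: dict) -> str:
    """Format a claim's provenance as a single citation string for reviewers."""
    return (
        f"{claim['case_id']} | sheet '{claim['sheet']}' cell {claim['cell']} | "
        f"\"{claim['quote']}\""
    )

claim = {
    "case_id": "CASE-017",
    "sheet": "Evidence",
    "cell": "D42",
    "quote": "School fees were the main reason the children stopped attending.",
}
```
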

Let's talk

Need a similar system or workflow?

If your team is dealing with the same kind of information, reporting, or evidence bottleneck, send a short brief and I can assess fit quickly.