Child Poverty Evidence Workflow for a UNICEF Report Project in Zambia

Romanos Boraine
Zambia

I built a spreadsheet-first evidence workflow for the primary contractor on a UNICEF report project in Zambia, turning 120 narrative case studies into traceable reporting outputs and cutting per-case analysis time from 60–90 minutes to about 15.

A primary contractor on a UNICEF child poverty report project in Zambia needed a faster, more defensible route from narrative case studies to reporting-ready evidence. I built a schema-first spreadsheet workflow that standardised 120 cases, added AI-assisted coding with guardrails, and gave non-technical writers a plain-English way to query the evidence base.

What this project delivered

Spreadsheet-ready evidence base covering 120 case studies

Ten-theme schema and data dictionary for standardised extraction

AI-assisted coding workflow with quote-per-claim guardrails

Reporting-ready tables, summaries, and draft-ready outputs

SOP plus two compact handover training sessions

Time frame

Three-month engagement

Status

Workflow delivered and handed over

Main deliverables

Evidence database, data dictionary, AI-assisted coding workflow, reporting tables, handover SOP

Main result

Cut analysis time to about 15 minutes per case and saved an estimated 120 analyst hours across the study.

Project overview

The problem

A primary contractor on a UNICEF child poverty report project in Zambia needed to turn 120 narrative case studies on female-headed households into reporting-ready evidence without losing consistency or traceability. The existing process was slow, theme handling varied from analyst to analyst, and the team needed outputs that non-technical writers could use under review. The fix had to work in spreadsheets, not in a specialist setup only analysts could run, and it had to leave the team with a handover-ready workflow they could keep using after delivery.

Context

The project supported a UNICEF report focused on female-headed households in Mongu and Kasama, where the research team had rich narrative material but needed a faster way to extract, structure, and reuse evidence across a multi-theme study. The engagement ran for three months and had to support credible reporting inside a review-heavy environment.

Constraints

The work had to speed up analysis without weakening evidence quality. Each case needed to map to the same schema, outputs had to stay traceable back to the source material, and the workflow had to be practical for a spreadsheet-based team rather than built around specialist tools only analysts could run.

What the team needed

  • A shared schema and data dictionary that could standardise extraction across ten themes
  • A coding workflow that captured only supported claims, used nulls for missing information, and flagged ambiguity for review
  • A spreadsheet-based evidence base that non-technical team members could query, review, and reuse
  • Reporting outputs that matched the study format and could be traced back to case ID, quote, and cell range
  • A handover process with SOPs and training so the team could run the workflow independently

Build

What I built

A governed, spreadsheet-first evidence workflow with AI-assisted coding, traceable reporting outputs, and a self-serve insight interface.

Named systems and workflow pieces

  • A ten-theme schema with defined fields, naming rules, formats, and guidance for missing values
  • An AI-assisted case coding workflow with quote-per-claim guardrails and human review points
  • A clean evidence database built in Excel and Google Sheets
  • Spreadsheet logic for rollups, location comparisons, ranked lists, and filtered analysis
  • A reporting-focused Custom GPT for plain-English querying, summary generation, table outputs, and formula visibility
  • QA checks, reconciliation checks, traceability rules, versioning logic, and access-control guidance
  • A simple SOP and two compact training sessions for handover
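To make the quote-per-claim guardrail concrete, here is a minimal, hypothetical sketch of the rule it enforces: every coded claim must carry a case ID and a verbatim supporting excerpt, missing information is recorded as a null rather than guessed, and ambiguous entries are flagged for human review. The field names and record shape below are illustrative, not the project's actual schema.

```python
# Hypothetical sketch of the quote-per-claim guardrail. Field names
# ("case_id", "quote", "ambiguous") are invented for this example.

def validate_coded_record(record, themes):
    """Return a list of guardrail violations for one coded case record."""
    issues = []
    if not record.get("case_id"):
        issues.append("missing case_id")
    for theme in themes:
        entry = record.get(theme)
        if entry is None:           # null = information absent in the narrative
            continue
        if not entry.get("quote"):  # every claim needs a supporting excerpt
            issues.append(f"{theme}: claim without supporting quote")
        if entry.get("ambiguous"):  # flagged entries go to human review
            issues.append(f"{theme}: flagged for review")
    return issues

record = {
    "case_id": "ZMB-041",
    "income_sources": {"claim": "sells vegetables at market",
                       "quote": "I sell tomatoes at the market..."},
    "school_attendance": None,  # not mentioned in the narrative, so left null
}
print(validate_coded_record(record, ["income_sources", "school_attendance"]))
```

A check like this is what keeps an AI-assisted coding pass honest: records that fail it go back for review instead of entering the evidence base.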

Visual proof

One spreadsheet-first workflow standardised 120 case studies

The system moved from schema definition to coding, querying, and handover in one controlled chain instead of leaving analysts to improvise theme handling case by case.

Proof chain

Narrative case study to coded record to reporting table to handover-ready workflow

Execution

How it worked

The workflow moved from raw material to usable output through a short sequence of controlled steps.

Process

  1. Defined the ten-theme schema, field rules, and data dictionary for the full study scope
  2. Processed cases one at a time through an AI-assisted extraction workflow with explicit guardrails
  3. Flattened structured outputs into Excel and Google Sheets to create a query-ready evidence base
  4. Added spreadsheet formulas and rollups for comparisons, ranked lists, and reporting tables
  5. Built a plain-English querying layer for report writers on top of the evidence base
  6. Added governance checks and handover guidance so the team could keep using the workflow
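The rollup step is easiest to picture with a small example. The real workflow used spreadsheet functions such as COUNTIFS over the evidence base; the sketch below reproduces the same logic in Python with invented rows and field names, purely to illustrate what "rollups, location comparisons, and ranked lists" mean here.

```python
# Illustrative rollup logic, assuming each coded case is one row with a
# location and a dominant theme. Rows and field names are invented.
from collections import Counter

rows = [
    {"case_id": "ZMB-001", "location": "Mongu",  "theme": "food_security"},
    {"case_id": "ZMB-002", "location": "Kasama", "theme": "food_security"},
    {"case_id": "ZMB-003", "location": "Mongu",  "theme": "education"},
    {"case_id": "ZMB-004", "location": "Mongu",  "theme": "food_security"},
]

# Rollup: cases per theme within one location (a COUNTIFS-style comparison).
mongu = Counter(r["theme"] for r in rows if r["location"] == "Mongu")

# Ranked list: themes ordered by frequency across the whole evidence base.
ranked = Counter(r["theme"] for r in rows).most_common()

print(mongu["food_security"])  # 2
print(ranked[0])               # ('food_security', 3)
```

In the delivered workbook the equivalent comparisons lived in formulas and pivot-style tables, so writers could filter by location or theme without touching the raw coded records.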

Deliverables

Outputs

These were the named assets, dated deliverables, and working materials left behind by the project.

Working outputs

  • Standardised evidence base in Excel and Google Sheets covering 120 case studies
  • Ten-theme data dictionary and schema documentation
  • AI-assisted qualitative coding workflow with review guardrails
  • Reporting-ready tables, summaries, and draft-ready sections
  • Plain-English Custom GPT insight interface
  • SOP and training materials for team handover

Outcome

Result

Cut analysis time to about 15 minutes per case and saved an estimated 120 analyst hours across the study.

Main result

  • Cut processing time per case from 60–90 minutes to about 15 minutes
  • Saved an estimated 120 analyst hours across the full dataset
  • Improved consistency across themes and locations with less interpretation drift
  • Made it easier for non-technical writers to generate and verify reporting-ready summaries on demand
  • Improved auditability by linking claims back to source excerpts and exact spreadsheet locations

Key learning

AI works best in evidence-heavy research when the system is built around schema discipline, traceability, and review guardrails rather than speed alone.

Qualification

Best fit

These are the situations where this kind of evidence workflow tends to be the strongest fit.

Who this is best for

  • Research teams working with large volumes of narrative case material
  • Evidence-heavy studies that need spreadsheet-first workflows rather than specialist software
  • Projects that need traceability from coded claim to reporting table
  • Teams under pressure to speed up qualitative coding without weakening review
  • Assignments that need a usable handover SOP as well as delivery outputs
Relevant services

Service stack connected to this case study

This case study sits inside the same delivery work, service logic, and practical outcomes shown across the site.

Database Architecture

Design practical database systems so information can be captured, organised, and used more effectively.

Custom AI Building

Build custom AI knowledge bases and tools around your own data environment.

Data Synthesis

Combine and interpret inputs from multiple sources into integrated findings.

Report Writing

Develop clear, structured outputs from evidence, data, and synthesised information.

Insight Generation

Turn raw data and synthesis into practical insights for decisions, planning, and strategy.

Similar case studies

These are the closest delivery examples on the site, based on the same service mix, adjacent workflow logic, or a very similar problem shape.

Let's talk

Need a similar workflow?

If your team is dealing with the same kind of information, reporting, or evidence bottleneck, send a short brief and I can assess fit quickly.