
African Pest Detector
Designing an AI crop diagnosis tool for low-resource farming environments
This project is a low-cost AI system for detecting agricultural pests affecting small-scale farmers in Africa, starting with Nigeria. The goal is practical rather than academic: help farmers identify crop diseases and pests quickly, provide clear and actionable advice, and make the system work on low-end phones with intermittent connectivity.

Challenge
Farmers often misidentify pests, apply the wrong treatment, and lose crops unnecessarily, while most AI tools assume strong connectivity, newer phones, and users comfortable with technical interfaces.
Outcome
The MVP defines a realistic path to a mobile-first workflow where a farmer captures a crop image, receives a pest or disease diagnosis, and gets short, usable advice designed for low-literacy and low-connectivity conditions.
01
Discovery
The project was framed around a real agricultural problem in underserved environments: farmers need faster, clearer decisions, but available datasets, interfaces, and deployment assumptions rarely reflect African field conditions.
02
Build
The technical strategy combines lightweight vision models such as MobileNet or EfficientNet with a local LLM layer that converts predictions into simple explanations and step-by-step actions for non-expert users.
03
Launch
The MVP stays deliberately narrow: capture or upload an image, classify the pest or disease, map the result to a label, and return concise farmer-friendly advice without over-engineering the system.
Notes
The core objective is to build a low-cost AI system for detecting agricultural pests affecting small-scale farmers in Africa, starting with Nigeria. The goal is not a research demo. It is a practical mobile tool that helps farmers identify crop diseases and pests quickly, understand what is happening, and decide what to do next.
The product is mobile first. A farmer takes a photo of a crop, the system detects a likely pest or disease condition, and the app returns a diagnosis, a simple explanation, and recommended actions. The value is not only in classification accuracy, but in turning raw machine learning output into usable farming decisions.
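The photo-to-diagnosis-to-advice flow described above can be sketched end to end. Everything in this sketch is illustrative: the model call is a stub standing in for the real classifier, and the label-to-advice mapping is a hypothetical example, not the project's actual data.

```python
# Minimal sketch of the photo -> diagnosis -> advice pipeline.
# classify_crop_image is a stub for the vision model; ADVICE is a
# hypothetical label-to-action mapping, not project data.

ADVICE = {
    "tomato_early_blight": (
        "Likely early blight on tomato.",
        ["Remove affected leaves.",
         "Avoid overhead watering.",
         "Apply a copper-based fungicide if available."],
    ),
    "healthy": ("No pest or disease detected.",
                ["No action needed; keep monitoring."]),
}

def classify_crop_image(image_bytes: bytes) -> tuple[str, float]:
    """Stub for the vision model: returns (label, confidence)."""
    return "tomato_early_blight", 0.91

def diagnose(image_bytes: bytes) -> dict:
    """Turn a crop photo into a label, a plain summary, and action steps."""
    label, confidence = classify_crop_image(image_bytes)
    summary, steps = ADVICE.get(
        label, ("Condition not recognised.",
                ["Consult a local extension officer."]))
    return {"label": label, "confidence": confidence,
            "summary": summary, "steps": steps}

result = diagnose(b"<jpeg bytes>")
print(result["summary"])
```

The point of the sketch is the shape of the value chain: classification alone is not the product; the dictionary lookup that converts a label into short, concrete steps is where the farmer-facing value sits.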
"The differentiation is not just pest detection. It is turning raw ML output into clear, actionable decisions for farmers operating in low-resource environments."
The system architecture is intentionally simple. The image recognition layer uses lightweight computer vision models such as MobileNet or EfficientNet through transfer learning, with plant images as input and pest or disease labels as output. These models are prioritized because they are mobile-compatible, edge-friendly, and more realistic for low-bandwidth or offline use than heavier alternatives.
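Whatever model is chosen, its raw output still has to be turned into a label and a confidence decision before any advice is generated. A pure-Python sketch of that postprocessing step (softmax plus a confidence threshold) is below; the class names are hypothetical placeholders, and in practice the logits would come from the MobileNet or EfficientNet head.

```python
import math

# Hypothetical class list; real labels would come from the trained model.
CLASSES = ["healthy", "tomato_early_blight", "maize_fall_armyworm"]

def softmax(logits: list[float]) -> list[float]:
    """Convert raw model logits into probabilities."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def top_prediction(logits: list[float], threshold: float = 0.6):
    """Return (label, probability); fall back to "uncertain" below threshold."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    label = CLASSES[best] if probs[best] >= threshold else "uncertain"
    return label, probs[best]

print(top_prediction([0.2, 3.1, 0.5]))
```

The threshold matters more here than in a typical demo: in an offline field setting, telling the farmer the system is uncertain is safer than returning a confident wrong label and a wrong treatment.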
On top of the classifier sits a lightweight reasoning layer using a local LLM, potentially based on LLaMA-style models. The purpose of this layer is not to generate complex agronomic analysis. It translates the prediction into plain language explanations and short step-by-step recommendations that a non-expert can follow. Clarity matters more than technical sophistication.
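Most of the work in that reasoning layer is constraining the prompt so the local model produces short, plain output rather than agronomic analysis. A sketch of prompt assembly is below; the wording, parameters, and the `build_advice_prompt` helper are all illustrative, and the actual LLM call is not shown.

```python
def build_advice_prompt(label: str, confidence: float,
                        crop: str, language: str = "English") -> str:
    """Assemble a constrained prompt for the local LLM.
    The wording here is a hypothetical example, not the project's prompt."""
    return (
        "You are helping a small-scale farmer in Nigeria.\n"
        f"Detected condition: {label.replace('_', ' ')} "
        f"(confidence {confidence:.0%}) on {crop}.\n"
        f"In {language}, using short plain sentences a non-expert can follow:\n"
        "1. Explain in one sentence what this condition is.\n"
        "2. List at most three low-cost actions the farmer can take now.\n"
        "Do not use technical jargon."
    )

prompt = build_advice_prompt("tomato_early_blight", 0.91, "tomato")
print(prompt)
```

Keeping the instructions explicit about sentence length, step count, and jargon is what makes a small local model viable here: the prompt, not model scale, carries the quality constraint.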
The initial data strategy starts with PlantVillage as a workable baseline, while recognizing an important limitation: the dataset is not Africa-specific. That creates a real risk around localization and field accuracy. A stronger long-term direction would require region-specific pest imagery and localized agricultural knowledge tailored to African crops, climates, and treatment realities.
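PlantVillage-style datasets are typically organised as one folder of images per class, so the label space for transfer learning can be derived directly from the directory layout. The sketch below builds that label list, simulated with a temporary directory; the folder names follow the PlantVillage naming convention but are placeholders here.

```python
import os
import tempfile

def class_labels(dataset_root: str) -> list[str]:
    """Derive sorted class labels from a PlantVillage-style layout:
    one sub-folder of images per pest or disease class."""
    return sorted(
        d for d in os.listdir(dataset_root)
        if os.path.isdir(os.path.join(dataset_root, d))
    )

# Simulate the layout with placeholder class folders.
with tempfile.TemporaryDirectory() as root:
    for name in ("Tomato___Early_blight", "Tomato___healthy"):
        os.makedirs(os.path.join(root, name))
    print(class_labels(root))
```

Keeping labels tied to the folder layout also makes the localization path concrete: adding an Africa-specific class later is a matter of adding a folder of regional imagery and retraining the classifier head, not redesigning the pipeline.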
The frontend is designed as a React Native mobile app with camera-based input and a simple interface for fast interaction. Accessibility is central to the concept. Users may have limited literacy, may prefer local languages, and may benefit from voice-based output rather than text-heavy explanations. This makes interface design just as important as model choice.
The project is defined by constraints more than features. Connectivity may be intermittent or absent. Devices may be low-cost Android phones with limited memory, battery, and compute. Time is constrained as well: with roughly one hour per week available for development, the MVP must stay focused and avoid unnecessary complexity.
That is why the current scope is intentionally narrow: photo to detection to advice. No complex real-time pipelines, no oversized backend, and no attempt to solve the entire agricultural stack at once. The broader directions, such as solar-powered IoT sensing, satellite prediction, smart spraying, or farmer chatbots, remain future possibilities rather than launch requirements.
What makes the concept compelling is that it aligns open-source models, low-cost infrastructure, and edge-friendly deployment with a real need in an underserved environment. The strongest part of the project is its constraint-driven design. The hardest parts are equally clear: dataset localization, real-world model accuracy, and actual distribution to farmers.
Q&A
What is this project?
It is a mobile-first AI system for helping small-scale farmers in Africa identify crop pests and diseases, starting with Nigeria, and receive clear recommendations on what to do next.
What problem does it solve?
It addresses a practical farming problem: pests and diseases are often misidentified, which leads to incorrect treatments, unnecessary crop loss, and slower decisions in the field.
How does the MVP work?
A farmer captures or uploads an image, a lightweight vision model classifies the likely pest or disease, and the system returns a short explanation plus simple action steps generated through a local reasoning layer.
Why use lightweight models like MobileNet or EfficientNet?
They are better suited to low-end Android phones, edge deployment, and offline or low-bandwidth scenarios than heavier models that require more memory, power, and connectivity.
Why include an LLM at all?
The LLM is there to turn model predictions into usable advice. The goal is not advanced reasoning for its own sake, but clearer explanations and more actionable recommendations for non-expert users.
What are the biggest risks in the project?
The main risks are dataset localization, real-world field accuracy, and distribution. PlantVillage is a useful start, but it does not fully represent African farming conditions or region-specific pest patterns.