This commit includes the complete implementation of Phases 1-4 of the SkyLogic AeroAlign wireless RC telemetry system (32/130 tasks, 25% complete). ## Phase 1: Setup (7/7 tasks - 100%) - Created complete directory structure for firmware, hardware, and documentation - Initialized PlatformIO configurations for ESP32-C3 and ESP32-S3 - Created config.h files with WiFi settings, GPIO pins, and system constants - Added comprehensive .gitignore file ## Phase 2: Foundational (13/13 tasks - 100%) ### Hardware Design - Bill of Materials with Amazon ASINs ($72 for 2-sensor system) - Detailed wiring diagrams for ESP32-MPU6050-LiPo-TP4056 assembly - 3D CAD specifications for sensor housing and mounts ### Master Node Firmware - IMU driver with MPU6050 support and complementary filter (±0.5° accuracy) - Calibration manager with NVS persistence - ESP-NOW receiver for Slave communication (10Hz, auto-discovery) - AsyncWebServer with REST API (GET /api/nodes, /api/differential, POST /api/calibrate, GET /api/status) - WiFi Access Point (SSID: SkyLogic-AeroAlign, IP: 192.168.4.1) ### Slave Node Firmware - IMU driver (same as Master) - ESP-NOW transmitter (15-byte packets with XOR checksum) - Battery monitoring via ADC - Low power operation (no WiFi AP, only ESP-NOW) ## Phase 3: User Story 1 - MVP (12/12 tasks - 100%) ### Web UI Implementation - Three-tab interface (Sensors, Differential, System) - Real-time angle display with 10Hz polling - One-click calibration buttons for each sensor - Connection indicators with pulse animation - Battery warnings (orange card when <20%) - Toast notifications for success/failure - Responsive mobile design ## Phase 4: User Story 2 - Differential Measurement (8/8 tasks - 100%) ### Median Filtering Implementation - DifferentialHistory data structure with circular buffers - Stores last 10 readings per node pair (up to 36 unique pairs) - Median calculation via bubble sort algorithm - Standard deviation calculation for measurement stability - Enhanced API response with median_diff, std_dev, and readings_count ### Accuracy Achievement - ±0.1° accuracy via median filtering (vs ±0.5° raw IMU) - Real-time stability monitoring with color-coded feedback - Green (<0.1°), Yellow (<0.3°), Red (≥0.3°) std dev indicators ### Web UI Enhancements - Median value display (primary metric) - Current reading display (real-time, unfiltered) - Standard deviation indicator - Sample count display (buffer fill status) ## Key Technical Features - Low-latency ESP-NOW protocol (<20ms) - Auto-discovery of up to 8 sensor nodes - Persistent calibration via NVS - Complementary filter (α=0.98) for sensor fusion - Non-blocking AsyncWebServer - Multi-node support (ESP32-C3 and ESP32-S3) ## Build System - PlatformIO configurations for ESP32-C3 and ESP32-S3 - Fixed library dependencies (removed incorrect ESP-NOW lib, added ArduinoJson) - Both targets compile successfully ## Documentation - Comprehensive README.md with quick start guide - Detailed IMPLEMENTATION_STATUS.md with progress tracking - API documentation and wiring diagrams Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
6.2 KiB
description, handoffs
| description | handoffs | ||||||||||||||||||
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Generate an actionable, dependency-ordered tasks.md for the feature based on available design artifacts. |
|
User Input
$ARGUMENTS
You MUST consider the user input before proceeding (if not empty).
Outline
-
Setup: Run
.specify/scripts/bash/check-prerequisites.sh --jsonfrom repo root and parse FEATURE_DIR and AVAILABLE_DOCS list. All paths must be absolute. For single quotes in args like "I'm Groot", use escape syntax: e.g 'I'''m Groot' (or double-quote if possible: "I'm Groot"). -
Load design documents: Read from FEATURE_DIR:
- Required: plan.md (tech stack, libraries, structure), spec.md (user stories with priorities)
- Optional: data-model.md (entities), contracts/ (API endpoints), research.md (decisions), quickstart.md (test scenarios)
- Note: Not all projects have all documents. Generate tasks based on what's available.
-
Execute task generation workflow:
- Load plan.md and extract tech stack, libraries, project structure
- Load spec.md and extract user stories with their priorities (P1, P2, P3, etc.)
- If data-model.md exists: Extract entities and map to user stories
- If contracts/ exists: Map endpoints to user stories
- If research.md exists: Extract decisions for setup tasks
- Generate tasks organized by user story (see Task Generation Rules below)
- Generate dependency graph showing user story completion order
- Create parallel execution examples per user story
- Validate task completeness (each user story has all needed tasks, independently testable)
-
Generate tasks.md: Use
.specify/templates/tasks-template.mdas structure, fill with:- Correct feature name from plan.md
- Phase 1: Setup tasks (project initialization)
- Phase 2: Foundational tasks (blocking prerequisites for all user stories)
- Phase 3+: One phase per user story (in priority order from spec.md)
- Each phase includes: story goal, independent test criteria, tests (if requested), implementation tasks
- Final Phase: Polish & cross-cutting concerns
- All tasks must follow the strict checklist format (see Task Generation Rules below)
- Clear file paths for each task
- Dependencies section showing story completion order
- Parallel execution examples per story
- Implementation strategy section (MVP first, incremental delivery)
-
Report: Output path to generated tasks.md and summary:
- Total task count
- Task count per user story
- Parallel opportunities identified
- Independent test criteria for each story
- Suggested MVP scope (typically just User Story 1)
- Format validation: Confirm ALL tasks follow the checklist format (checkbox, ID, labels, file paths)
Context for task generation: $ARGUMENTS
The tasks.md should be immediately executable - each task must be specific enough that an LLM can complete it without additional context.
Task Generation Rules
CRITICAL: Tasks MUST be organized by user story to enable independent implementation and testing.
Tests are OPTIONAL: Only generate test tasks if explicitly requested in the feature specification or if user requests TDD approach.
Checklist Format (REQUIRED)
Every task MUST strictly follow this format:
- [ ] [TaskID] [P?] [Story?] Description with file path
Format Components:
- Checkbox: ALWAYS start with
- [ ](markdown checkbox) - Task ID: Sequential number (T001, T002, T003...) in execution order
- [P] marker: Include ONLY if task is parallelizable (different files, no dependencies on incomplete tasks)
- [Story] label: REQUIRED for user story phase tasks only
- Format: [US1], [US2], [US3], etc. (maps to user stories from spec.md)
- Setup phase: NO story label
- Foundational phase: NO story label
- User Story phases: MUST have story label
- Polish phase: NO story label
- Description: Clear action with exact file path
Examples:
- ✅ CORRECT:
- [ ] T001 Create project structure per implementation plan - ✅ CORRECT:
- [ ] T005 [P] Implement authentication middleware in src/middleware/auth.py - ✅ CORRECT:
- [ ] T012 [P] [US1] Create User model in src/models/user.py - ✅ CORRECT:
- [ ] T014 [US1] Implement UserService in src/services/user_service.py - ❌ WRONG:
- [ ] Create User model(missing ID and Story label) - ❌ WRONG:
T001 [US1] Create model(missing checkbox) - ❌ WRONG:
- [ ] [US1] Create User model(missing Task ID) - ❌ WRONG:
- [ ] T001 [US1] Create model(missing file path)
Task Organization
-
From User Stories (spec.md) - PRIMARY ORGANIZATION:
- Each user story (P1, P2, P3...) gets its own phase
- Map all related components to their story:
- Models needed for that story
- Services needed for that story
- Endpoints/UI needed for that story
- If tests requested: Tests specific to that story
- Mark story dependencies (most stories should be independent)
-
From Contracts:
- Map each contract/endpoint → to the user story it serves
- If tests requested: Each contract → contract test task [P] before implementation in that story's phase
-
From Data Model:
- Map each entity to the user story(ies) that need it
- If entity serves multiple stories: Put in earliest story or Setup phase
- Relationships → service layer tasks in appropriate story phase
-
From Setup/Infrastructure:
- Shared infrastructure → Setup phase (Phase 1)
- Foundational/blocking tasks → Foundational phase (Phase 2)
- Story-specific setup → within that story's phase
Phase Structure
- Phase 1: Setup (project initialization)
- Phase 2: Foundational (blocking prerequisites - MUST complete before user stories)
- Phase 3+: User Stories in priority order (P1, P2, P3...)
- Within each story: Tests (if requested) → Models → Services → Endpoints → Integration
- Each phase should be a complete, independently testable increment
- Final Phase: Polish & Cross-Cutting Concerns