Utility Scripts
This document describes the utility scripts available in the Tracker GraphQL application for maintenance, testing, and troubleshooting.
Overview
The application includes several utility scripts located in the scripts/ directory that help with common tasks:
- Database Maintenance: Scripts for checking and refreshing database tables
- User Management: Scripts for creating and managing users
- Cache Management: Scripts for flushing the Redis cache
- Data Listing: Scripts for listing trackers and production runs
Database Maintenance Scripts
check_aggregated_tables.py
This script checks the status of the TimescaleDB continuous aggregates and materialized views.
python scripts/check_aggregated_tables.py
Output example:
Checking location_history_hourly...
- Last refresh: 2025-04-17 13:45:00+01
- Row count: 12,543
Checking location_history_daily...
- Last refresh: 2025-04-17 00:00:00+01
- Row count: 1,245
Checking location_history materialized view...
- Last refresh: 2025-04-17 14:00:00+01
- Row count: 13,788
refresh_continuous_aggregates.py
This script manually refreshes the TimescaleDB continuous aggregates and materialized views.
python scripts/refresh_continuous_aggregates.py
You can specify a time range to refresh:
python scripts/refresh_continuous_aggregates.py --start "2025-04-10" --end "2025-04-17"
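Under the hood, a refresh script like this typically issues TimescaleDB's `refresh_continuous_aggregate` procedure once per aggregate. A minimal sketch of how the statement could be built (the view names come from this document; the helper function itself is hypothetical, not the script's actual code):

```python
# Sketch: build the CALL statement a refresh script might issue per aggregate.
# Note: refresh_continuous_aggregate cannot run inside a transaction block, so
# a real script would execute these on an autocommit connection.

AGGREGATES = ["location_history_hourly", "location_history_daily"]

def build_refresh_call(view: str, start: str, end: str) -> str:
    """Return the TimescaleDB CALL statement for one continuous aggregate."""
    return f"CALL refresh_continuous_aggregate('{view}', '{start}', '{end}');"
```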
User Management Scripts
create_user.py
This script creates a new user in the database.
python scripts/create_user.py --email user@example.com --password securepassword --name "User Name" --role admin
Available roles:
- admin: Full access to all features
- user: Standard user access
- viewer: Read-only access
Cache Management Scripts
flush_cache.py
This script flushes the Redis cache, which can be useful when troubleshooting or after making database changes.
python scripts/flush_cache.py
You can flush specific cache keys:
python scripts/flush_cache.py --pattern "location_history:*"
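Pattern-based flushing is usually done with Redis's cursor-based SCAN rather than the blocking KEYS command. A minimal sketch, assuming the redis-py client (the `FakeRedis` stand-in below exists only so the example is self-contained):

```python
import fnmatch

def delete_matching(client, pattern: str) -> int:
    """Delete every key matching `pattern` via SCAN (non-blocking, unlike KEYS)."""
    deleted = 0
    for key in client.scan_iter(match=pattern):
        deleted += client.delete(key)
    return deleted

class FakeRedis:
    """In-memory stand-in for redis.Redis, for illustration only."""
    def __init__(self, keys):
        self.store = set(keys)
    def scan_iter(self, match="*"):
        return [k for k in list(self.store) if fnmatch.fnmatch(k, match)]
    def delete(self, key):
        if key in self.store:
            self.store.discard(key)
            return 1
        return 0
```

With a real connection the same function applies unchanged: `delete_matching(redis.Redis(host="localhost"), "location_history:*")`; `scan_iter(match=...)` is redis-py's iterator over SCAN cursors.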
Data Import Scripts
import_trackers.py
This script imports tracker data from a CSV file into the trackers table.
python scripts/import_trackers.py <production_run_id> --csv-path path/to/trackers.csv
The script:
- Validates that the production run exists
- Reads tracker data from the CSV file
- Handles flexible column naming in the CSV
- Skips rows with missing required fields
- Avoids creating duplicate trackers
- Provides detailed logging of the import process
Example usage:
# Import trackers for production run ID 5 using the default CSV file
python scripts/import_trackers.py 5
# Import trackers for production run ID 10 using a specific CSV file
python scripts/import_trackers.py 10 --csv-path data/new-trackers.csv
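The flexible column naming, missing-field skipping, and duplicate avoidance described above can be sketched as follows. The alias table and the required fields (`serial`, `name`) are assumptions for illustration; the real script's column mapping may differ:

```python
import csv
import io

# Hypothetical aliases mapping CSV headers to canonical field names.
COLUMN_ALIASES = {
    "serial": "serial",
    "serial_number": "serial",
    "name": "name",
    "tracker_name": "name",
}

def parse_tracker_rows(csv_text, existing_serials):
    """Yield cleaned tracker rows, skipping incomplete rows and duplicates."""
    rows = []
    for raw in csv.DictReader(io.StringIO(csv_text)):
        row = {}
        for key, value in raw.items():
            canonical = COLUMN_ALIASES.get(key.strip().lower())
            if canonical:
                row[canonical] = (value or "").strip()
        if not row.get("serial") or not row.get("name"):
            continue  # skip rows with missing required fields
        if row["serial"] in existing_serials:
            continue  # avoid creating duplicate trackers
        existing_serials.add(row["serial"])
        rows.append(row)
    return rows
```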
Data Listing Scripts
list_trackers.py
This script lists all trackers in the database with their current status.
python scripts/list_trackers.py
You can filter by client:
python scripts/list_trackers.py --client-id 123
Output example:
ID | Name | Status | Last Report | Client
----+-------------+-------------+---------------------+--------
1 | Tracker-001 | in_transit | 2025-04-17 13:45:00 | Client A
2 | Tracker-002 | in_storage | 2025-04-17 12:30:00 | Client A
3 | Tracker-003 | delivered | 2025-04-16 09:15:00 | Client B
list_production_runs.py
This script lists all production runs in the database.
python scripts/list_production_runs.py
You can filter by client:
python scripts/list_production_runs.py --client-id 123
Output example:
ID | Name | Start Date | End Date | Client | Trackers
----+-------------+-------------+-------------+-----------+--------
1 | Run-2025-01 | 2025-01-15 | 2025-02-28 | Client A | 5
2 | Run-2025-02 | 2025-03-01 | 2025-04-15 | Client A | 8
3 | Run-2025-03 | 2025-03-15 | 2025-04-30 | Client B | 3
Mock Data Scripts
setup_mock_data.py
This script sets up mock data for testing and development purposes.
python scripts/setup_mock_data.py
This script:
- Creates mock clients
- Creates mock trackers
- Creates mock storage and delivery locations
- Creates mock production runs
- Associates trackers with production runs
- Generates mock location reports
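Generating mock location reports can be as simple as a seeded random walk. A sketch under those assumptions (field names and step sizes are illustrative, not the script's actual schema):

```python
import random
from datetime import datetime, timedelta

def generate_mock_reports(tracker_id, start, count, lat=51.5, lon=-0.12, seed=None):
    """Produce `count` hourly location reports following a small random walk."""
    rng = random.Random(seed)  # seedable so test data is reproducible
    reports = []
    for i in range(count):
        lat += rng.uniform(-0.01, 0.01)
        lon += rng.uniform(-0.01, 0.01)
        reports.append({
            "tracker_id": tracker_id,
            "reported_at": start + timedelta(hours=i),
            "latitude": round(lat, 6),
            "longitude": round(lon, 6),
        })
    return reports
```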
setup_mock_data_direct.py
This script sets up mock data directly in the database, bypassing the GraphQL API.
python scripts/setup_mock_data_direct.py
This is faster than setup_mock_data.py but doesn't test the GraphQL API.
Running Scripts in Production
When running scripts in a production environment, it's recommended to:
- Make a backup of the database first
- Run the script in a test environment first
- Use the --dry-run flag if available to see what changes would be made
- Run the script during off-peak hours
Example:
# Make a backup
pg_dump -U postgres -d tracker > backup_before_script.sql
# Run the script with dry run
python scripts/flush_cache.py --dry-run
# If everything looks good, run the script
python scripts/flush_cache.py
Creating New Scripts
When creating new utility scripts, follow these guidelines:
- Place the script in the scripts/ directory
- Use the same database connection method as existing scripts
- Include command-line arguments for flexibility
- Add a --dry-run flag for safety
- Include helpful output and error messages
- Document the script in this file
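The argument handling and --dry-run guard from the guidelines above could be sketched like this (the `flush_stale_keys` action is a hypothetical placeholder):

```python
import argparse

def parse_args(argv=None):
    parser = argparse.ArgumentParser(description="Skeleton for a new utility script")
    parser.add_argument("--dry-run", action="store_true",
                        help="Report what would change without writing anything")
    return parser.parse_args(argv)

def flush_stale_keys(keys, dry_run=False):
    """Illustrative action: report (or pretend to perform) a deletion."""
    if dry_run:
        return f"DRY RUN: would delete {len(keys)} keys"
    # ... perform the real deletion here ...
    return f"Deleted {len(keys)} keys"
```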