Overview
LogFleet is designed for high-throughput edge deployments. This guide covers hardware requirements, performance benchmarks, capacity planning, and tuning recommendations for different scales.

All benchmarks were conducted on standard hardware configurations. Your results may vary based on log complexity, network conditions, and workload patterns.
Hardware Requirements
Minimum Requirements (Development/Testing)
For local development and small-scale testing:

| Component | Specification |
|---|---|
| CPU | 2 cores |
| RAM | 4 GB |
| Storage | 20 GB SSD |
| Network | 10 Mbps |
Recommended (Single Edge Location)
For production single-location deployments handling typical retail/IoT workloads:

| Component | Specification | Notes |
|---|---|---|
| CPU | 4 cores (Intel i5/AMD Ryzen 5) | Vector benefits from multiple cores |
| RAM | 8 GB | 4 GB for Vector, 2 GB for Loki, 2 GB OS |
| Storage | 100 GB NVMe SSD | Scales with retention period |
| Network | 100 Mbps | For metric shipping and on-demand streaming |
Production (High-Volume Location)
For high-volume locations (large retail stores, manufacturing floors):

| Component | Specification | Notes |
|---|---|---|
| CPU | 8 cores (Intel i7/Xeon) | Enables parallel processing |
| RAM | 16 GB | Larger buffers, more concurrent queries |
| Storage | 500 GB NVMe SSD | 30-day retention at high volume |
| Network | 1 Gbps | Burst capacity for streaming |
Enterprise (3-Node Cluster)
For mission-critical deployments requiring high availability:

| Component | Per Node | Total Cluster |
|---|---|---|
| CPU | 8 cores | 24 cores |
| RAM | 32 GB | 96 GB |
| Storage | 1 TB NVMe | 3 TB (with replication) |
| Network | 10 Gbps | Dedicated management network |
Performance Benchmarks
Log Ingestion Throughput
Measured on recommended single-location hardware (4 cores, 8 GB RAM):

| Log Size | Throughput | CPU Usage | Memory |
|---|---|---|---|
| 256 bytes | 85,000 logs/s | 65% | 2.1 GB |
| 512 bytes | 62,000 logs/s | 72% | 2.4 GB |
| 1 KB | 45,000 logs/s | 78% | 2.8 GB |
| 4 KB | 18,000 logs/s | 85% | 3.2 GB |
Log-to-Metric Extraction
Vector's `log_to_metric` transform performance:
| Metrics per Log | Throughput Impact | CPU Overhead |
|---|---|---|
| 1 metric | -5% | +8% |
| 3 metrics | -12% | +15% |
| 5 metrics | -18% | +22% |
| 10 metrics | -28% | +35% |
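To illustrate where that overhead comes from, here is a minimal sketch of a `log_to_metric` transform; the input name and the `status`/`duration_ms` fields are hypothetical:

```yaml
transforms:
  request_metrics:
    type: log_to_metric
    inputs: ["parsed_logs"]   # hypothetical upstream component
    metrics:
      # Each entry below is one derived metric per matching log event
      - type: counter
        field: status
        name: http_responses_total
        tags:
          status: "{{ status }}"
      - type: histogram
        field: duration_ms
        name: request_duration_ms
```

Every additional entry under `metrics` adds per-event work, which is why throughput drops as the metric count grows in the table above.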
Query Latency (Loki)
Query performance on 7-day retention with 50 GB of data:

| Query Type | Latency (p50) | Latency (p99) |
|---|---|---|
| Simple filter (`{service="api"}`) | 45ms | 180ms |
| Regex match (`\|~ "error"`) | 120ms | 450ms |
| JSON parsing (`\| json`) | 200ms | 800ms |
| Aggregation (`count_over_time`) | 350ms | 1.2s |
| Full-text search | 500ms | 2.5s |
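Query shape drives these numbers: label matchers and line filters are cheap, parsers are not. A hypothetical query that narrows the result set before the expensive `json` stage (field names are illustrative):

```logql
{service="api"} |= "error" | json | status_code >= 500
```

Because stages run left to right, the `|= "error"` line filter means the parser only sees matching lines, keeping latency closer to the regex row than the full JSON-parsing row above.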
Network Bandwidth
Metric shipping bandwidth (compressed, to cloud):

| Locations | Metrics/min | Bandwidth |
|---|---|---|
| 10 | 6,000 | 50 KB/s |
| 100 | 60,000 | 500 KB/s |
| 1,000 | 600,000 | 5 MB/s |
| 10,000 | 6,000,000 | 50 MB/s |
On-demand log streaming consumes additional bandwidth on top of metric shipping:

- Typical: 1-10 MB/s per location
- Peak: 50-100 MB/s during incident investigation
Capacity Planning
Storage Calculator
Estimate storage requirements based on your workload:

| Logs/sec | Avg Size | Retention | Raw Data | Compressed |
|---|---|---|---|---|
| 1,000 | 512 B | 7 days | 302 GB | 45 GB |
| 5,000 | 512 B | 7 days | 1.5 TB | 225 GB |
| 10,000 | 256 B | 14 days | 2.4 TB | 360 GB |
| 50,000 | 256 B | 7 days | 8.6 TB | 1.3 TB |
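The underlying arithmetic is simply: raw ≈ logs/sec × average log size × retention in seconds. For example, 1,000 logs/s × 512 B × 604,800 s (7 days) ≈ 0.3 TB raw, which shrinks to roughly 45 GB at the ~7:1 compression ratio the table assumes.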
Memory Sizing
| Throughput | Vector | Loki | Total Recommended |
|---|---|---|---|
| 10K logs/s | 2 GB | 2 GB | 6 GB |
| 50K logs/s | 4 GB | 4 GB | 12 GB |
| 100K logs/s | 6 GB | 6 GB | 16 GB |
| 200K+ logs/s | 8 GB | 8 GB | 24 GB |
CPU Sizing
As a rough rule derived from the ingestion benchmarks above: 4 cores sustain roughly 45,000-85,000 logs/s depending on log size, so plan for 8 cores once sustained CPU exceeds the 70% warning threshold in the monitoring table below.
Tuning Guidelines
Vector Configuration
Optimize Vector for your workload:
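A minimal end-to-end pipeline as a starting point; the file path, `service` label, and Loki endpoint are placeholders for your environment:

```yaml
# vector.yaml -- illustrative sketch, not a drop-in config
sources:
  app_logs:
    type: file
    include:
      - /var/log/app/*.log   # hypothetical path

transforms:
  parse:
    type: remap
    inputs: ["app_logs"]
    source: |
      # Parse JSON lines where possible; leave non-JSON lines untouched
      .parsed = parse_json(.message) ?? {}

sinks:
  loki_out:
    type: loki
    inputs: ["parse"]
    endpoint: "http://localhost:3100"
    encoding:
      codec: json
    labels:
      service: "app"   # keep labels low-cardinality
```

Vector parallelizes across available cores by default, which is why the hardware tables above emphasize core count.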
Loki Configuration
Optimize Loki for edge deployments:
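An abridged single-binary sketch (schema and storage sections omitted for brevity; values are starting points, not defaults):

```yaml
# loki-config.yaml -- abridged example for a single edge node
auth_enabled: false

server:
  http_listen_port: 3100

common:
  path_prefix: /var/lib/loki
  replication_factor: 1
  ring:
    kvstore:
      store: inmemory

ingester:
  chunk_idle_period: 5m        # flush idle streams sooner on bursty workloads
  chunk_target_size: 1572864   # ~1.5 MB chunks balance I/O and memory

compactor:
  retention_enabled: true

limits_config:
  retention_period: 168h       # 7 days; pair with the storage table above
  ingestion_rate_mb: 16
  ingestion_burst_size_mb: 32
```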
OS-Level Tuning
For high-throughput Linux deployments:
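Example kernel settings as a starting point (values are illustrative and should be validated against your workload):

```ini
# /etc/sysctl.d/99-logfleet.conf
fs.file-max = 1000000        # many open log files and sockets
net.core.somaxconn = 4096    # deeper accept queue for busy listeners
net.core.rmem_max = 16777216 # larger socket receive buffers
net.core.wmem_max = 16777216 # larger socket send buffers
vm.swappiness = 10           # prefer reclaiming page cache over swapping
```

Apply with `sysctl --system`, and raise the per-process file-descriptor limit to match (e.g., `LimitNOFILE` in the systemd units).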
Monitoring & Alerting
Key Metrics to Monitor
| Metric | Warning | Critical | Action |
|---|---|---|---|
| CPU usage | >70% | >90% | Scale up or reduce transforms |
| Memory usage | >75% | >90% | Increase RAM or reduce buffers |
| Disk usage | >70% | >85% | Reduce retention or add storage |
| Ingestion rate drop | >20% | >50% | Check sources and network |
| Query latency p99 | >2s | >5s | Optimize queries or add cache |
| Buffer backpressure | >50% | >80% | Scale sink capacity |
Vector Metrics Endpoint
- `vector_component_received_events_total` - Ingestion rate
- `vector_buffer_events` - Buffer pressure
- `vector_component_sent_events_total` - Output rate
- `vector_component_errors_total` - Error rate
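These counters become scrapeable once internal metrics are enabled; a minimal sketch:

```yaml
sources:
  internal:
    type: internal_metrics

sinks:
  prometheus:
    type: prometheus_exporter
    inputs: ["internal"]
    address: "0.0.0.0:9598"   # Vector's conventional exporter port
```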
Loki Metrics
Loki exposes Prometheus metrics at `/metrics`:
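For example, a quick spot check over HTTP (Loki's default port is 3100):

```bash
curl -s http://localhost:3100/metrics | grep loki_ingester_memory_chunks
```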
Key Loki metrics:
- `loki_ingester_chunks_stored_total` - Storage growth
- `loki_request_duration_seconds` - Query latency
- `loki_ingester_memory_chunks` - Memory pressure
- `loki_distributor_bytes_received_total` - Ingestion rate
Sample Prometheus Alerts
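A sketch of alerting rules whose thresholds mirror the monitoring table above (group and alert names are arbitrary):

```yaml
# alerts.yaml
groups:
  - name: logfleet
    rules:
      - alert: LokiQueryLatencyHigh
        expr: >
          histogram_quantile(0.99,
            sum(rate(loki_request_duration_seconds_bucket[5m])) by (le)) > 2
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "Loki p99 query latency above 2s"
      - alert: VectorComponentErrors
        expr: rate(vector_component_errors_total[5m]) > 0
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "Vector components are reporting errors"
```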
Scaling Strategies
Vertical Scaling
When to scale up a single node:

| Symptom | Solution |
|---|---|
| CPU consistently >80% | Add cores or upgrade CPU |
| Memory pressure / OOM | Add RAM, reduce buffers |
| Disk I/O bottleneck | Upgrade to NVMe, add RAID |
| Query timeouts | Add RAM for cache, faster storage |
Horizontal Scaling (Multi-Node)
When to deploy a cluster:

- High availability requirement - Deploy 3+ nodes with replication
- Throughput >200K logs/s - Distribute ingestion load
- Multi-tenant isolation - Separate workloads
- Geographic distribution - Regional edge clusters
Best Practices
Right-size your hardware
Start with recommended specs and monitor for 2 weeks before scaling. Over-provisioning wastes resources; under-provisioning causes data loss.
Use SSDs, not HDDs
Loki’s write patterns require fast random I/O. NVMe SSDs provide 10-100x better performance than spinning disks.
Set retention limits
Always configure retention limits to prevent disk exhaustion. Ring buffer semantics ensure oldest logs are deleted first.
Batch sink writes
Configure Vector sinks to batch writes. Larger batches reduce network overhead and improve throughput.
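A sketch of batch settings on a Loki sink; the `service` field used for the label and all values are illustrative, not tuned defaults:

```yaml
sinks:
  loki_out:
    type: loki
    inputs: ["parse"]
    endpoint: "http://localhost:3100"
    encoding:
      codec: json
    labels:
      service: "{{ service }}"
    batch:
      max_bytes: 1048576   # ~1 MiB per request cuts per-write overhead
      timeout_secs: 2      # but still flush at least every 2 seconds
```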
Limit metric cardinality
High-cardinality labels (user IDs, request IDs) explode storage. Use log fields for high-cardinality data, labels for low-cardinality.
Monitor buffer backpressure
Buffer backpressure indicates sinks can’t keep up. Investigate sink bottlenecks before increasing buffer sizes.
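If you do need a larger or more durable buffer after ruling out sink problems, a sketch (sizes illustrative):

```yaml
sinks:
  loki_out:
    # ...sink settings as above...
    buffer:
      type: disk            # survives restarts and absorbs bursts
      max_size: 536870912   # 512 MiB on-disk buffer
      when_full: block      # apply backpressure rather than drop events
```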