
Research Labs


Deep-dive systems research in distributed architectures, fog computing, and AI/ML. Each project is backed by rigorous simulations, technical reports, and quantifiable metrics.

Distributed Systems: Fog Computing, Edge Networks
UAV Networks: Mobility, Caching, RL Agents
AI/NLU: NER, Intent Classification

#01 · Project Completed

LOKI

Desktop-Native Voice Assistant with Hybrid NLU

NER F1-Score: 99.77%
Precision: 99.77%
Samples: 1389

Overview

A privacy-first, offline-capable voice assistant that outperforms cloud-dependent alternatives in latency and system integration. Features a novel hybrid NLU engine that combines a fast embedding-based classifier with a local LLM fallback for complex intent understanding.

Architecture

┌─────────────────────────────────────────────────────────────┐
│                    LOKI Hybrid NLU Pipeline                  │
├─────────────────────────────────────────────────────────────┤
│  ┌─────────────┐                                           │
│  │ Voice Input │                                           │
│  └──────┬──────┘                                           │
│         │ (Faster-Whisper)                                 │
│         ▼                                                  │
│  ┌─────────────┐    Confidence > 0.6?    ┌────────────────┐│
│  │ Transcript  │ ───────────┬──────────▶ │ FastClassifier ││
│  └─────────────┘            │            │ (Embeddings)   ││
│                             │            └───────┬────────┘│
│                             │                    │         │
│                             ▼                    │         │
│                    ┌────────────────┐            │         │
│                    │ LLM Fallback   │            │         │
│                    │ (Ollama/Phi)   │            │         │
│                    └────────┬───────┘            │         │
│                             │                    │         │
│                             ▼                    ▼         │
│                    ┌────────────────────────────────────┐  │
│                    │      Intent & Parameter Merger     │  │
│                    └────────────────────────────────────┘  │
└─────────────────────────────────────────────────────────────┘
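In miniature, the confidence gate above works as follows. The three-dimensional prototype vectors, the `llm_fallback` stub, and the function names are illustrative stand-ins, not LOKI's actual API; only the 0.6 threshold comes from the diagram:

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# Toy prototype embeddings per intent; a real system would use
# Sentence-Transformers vectors of labeled example utterances.
PROTOTYPES = {
    "volume_up":  [0.9, 0.1, 0.0],
    "open_app":   [0.1, 0.9, 0.1],
    "calculator": [0.0, 0.1, 0.9],
}

def llm_fallback(text):
    # Placeholder for the local LLM (Ollama/Phi) call.
    return {"intent": "llm_resolved", "source": "llm", "text": text}

def classify(embedding, text, threshold=0.6):
    """Route via the fast embedding classifier; fall back to the LLM below threshold."""
    intent, score = max(
        ((name, cosine(embedding, proto)) for name, proto in PROTOTYPES.items()),
        key=lambda pair: pair[1],
    )
    if score > threshold:
        return {"intent": intent, "source": "fast", "confidence": score}
    return llm_fallback(text)
```

In the real pipeline the embedding would come from a Sentence-Transformers encoder and the fallback from a local Ollama model; both are stubbed here so the routing logic stands on its own.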

Technical Decisions

Decision: Hybrid NLU Architecture
Trade-off: Complexity vs Latency/Accuracy
Outcome: Used the FastClassifier (embeddings) for common commands (<60 ms) and the LLMClassifier (Ollama) only for complex queries, balancing speed and flexibility.

Decision: Synthetic Data Generation
Trade-off: Realism vs Training Volume
Outcome: Generated 1389 labeled sentences to train the CRF model, achieving a 99.77% F1-score on parameter extraction without expensive manual labeling.

Decision: Agent-Based Dispatch
Trade-off: Monolithic vs Modular
Outcome: Decoupled NLU from execution; new capabilities (e.g., volume control, calculator) can be added as independent agents without retraining the core model.
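The agent-based dispatch idea can be sketched as a simple intent-to-handler registry. The agent names and the `register`/`dispatch` helpers are hypothetical, chosen to illustrate the decoupling rather than LOKI's real interfaces:

```python
# Minimal agent registry: NLU output is routed to independent agents,
# so new capabilities plug in without retraining the classifier.
AGENTS = {}

def register(intent):
    """Decorator that binds a handler function to an intent name."""
    def wrap(handler):
        AGENTS[intent] = handler
        return handler
    return wrap

@register("volume_up")
def volume_agent(params):
    step = int(params.get("step", 10))
    return f"volume raised by {step}"

@register("calculate")
def calculator_agent(params):
    # Note: eval on untrusted input is unsafe; acceptable only in a sketch.
    return str(eval(params["expression"], {"__builtins__": {}}))

def dispatch(intent, params):
    handler = AGENTS.get(intent)
    if handler is None:
        return f"no agent for intent '{intent}'"
    return handler(params)
```

Adding a new capability is then one decorated function; the NLU core never changes.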

Tech Stack

Python · Faster-Whisper · Sentence-Transformers · Ollama (LLM) · sklearn-crfsuite

#02 · Simulation Study

MUCEDS

Multi-UAV Cost-Efficient Deployment Scheme

Cache Hit Ratio: ~90%
Profit Increase: 4x vs Base
Latency: ~21 Steps

Overview

A joint optimization framework for UAV-assisted Vehicular Edge Computing Networks (VECNs). Integrates realistic mobility modeling (SUMO) with Hierarchical Reinforcement Learning (HRL) and LSTM-based predictive caching to minimize latency and maximize system profit.

Architecture

┌─────────────────────────────────────────────────────────────┐
│                    MUCEDS Control Loop                       │
├─────────────────────────────────────────────────────────────┤
│  ┌──────────────┐       ┌──────────────┐      ┌───────────┐ │
│  │  SUMO World  │◀─────▶│ Python Agent │◀────▶│ Prediction│ │
│  │ (Real Maps)  │       │ (HRL Control)│      │  (LSTM)   │ │
│  └──────┬───────┘       └──────┬───────┘      └───────────┘ │
│         │                      │                            │
│         ▼                      ▼                            │
│  Traffic Density        Action (Count, Vel)                 │
│         │                      │                            │
│         └──────────┬───────────┘                            │
│                    ▼                                        │
│           ┌──────────────────┐                              │
│           │ System Reward    │                              │
│           │ (Profit - Cost)  │                              │
│           └──────────────────┘                              │
└─────────────────────────────────────────────────────────────┘
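The reward signal at the bottom of the loop (profit minus cost) can be shown in miniature. All constants and names below are illustrative, not values from the MUCEDS report:

```python
# One control step in miniature: the outer (strategic) action picks the UAV
# count; reward = profit from served requests minus per-UAV operating cost.

PRICE_PER_REQUEST = 2.0   # revenue per request served at the edge
COST_PER_UAV = 5.0        # deployment/energy cost per UAV per step
COVERAGE_PER_UAV = 40     # requests one UAV can serve per step

def step_reward(uav_count, demand):
    """System reward for one step: profit - cost."""
    served = min(demand, uav_count * COVERAGE_PER_UAV)
    profit = PRICE_PER_REQUEST * served
    cost = COST_PER_UAV * uav_count
    return profit - cost

# Too few UAVs leave demand unserved; too many pay idle cost.
best = max(range(10), key=lambda n: step_reward(n, demand=100))
```

This captures the trade-off the HRL agent learns: the reward peaks at the smallest fleet that covers current demand, which is why the outer DDQN controls the count and the inner MADDPG only has to position that fleet.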

Technical Decisions

Decision: Hierarchical RL (HRL)
Trade-off: Training Stability vs Control Granularity
Outcome: Decomposed the problem: DDQN (outer) optimizes the UAV count (strategic), while MADDPG (inner) optimizes positions (tactical). Stabilized training convergence.

Decision: Spatial-Temporal LSTM
Trade-off: Compute Overhead vs Reactive Caching
Outcome: Predictive caching based on user request patterns increased the Cache Hit Ratio from ~20% (Zipf) to ~90%, significantly reducing cloud backhaul costs.

Decision: SUMO Integration
Trade-off: Simulation Complexity vs Realism
Outcome: Replaced random-waypoint models with real city topologies (Delhi, Mumbai). Proved UAVs could learn to track realistic traffic hotspots.
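A toy Monte Carlo run illustrates why predictive prefetching beats purely reactive caching under Zipf-like demand. The popularity counter here is a crude stand-in for the LSTM forecast, and every constant is illustrative rather than taken from the MUCEDS experiments:

```python
import random
from collections import Counter, deque

CATALOG, CACHE_SIZE, STEPS = 200, 20, 5000
# Zipf-like popularity: item i is requested with weight 1/(i+1).
WEIGHTS = [1.0 / (i + 1) for i in range(CATALOG)]

def hit_ratio(predictive):
    """Cache hit ratio: reactive insert-on-miss vs periodic popularity prefetch."""
    random.seed(7)                      # identical request stream for both runs
    cache = deque(maxlen=CACHE_SIZE)
    history, hits = Counter(), 0
    for t in range(STEPS):
        item = random.choices(range(CATALOG), weights=WEIGHTS)[0]
        history[item] += 1
        if item in cache:
            hits += 1
        elif not predictive:
            cache.append(item)          # reactive: cache on miss, FIFO evict
        if predictive and t % 50 == 0:  # stand-in for the LSTM forecast:
            cache = deque((i for i, _ in history.most_common(CACHE_SIZE)),
                          maxlen=CACHE_SIZE)
    return hits / STEPS
```

With these settings the prefetching cache converges toward holding the true top-20 items, while the reactive cache keeps churning on the long tail; the gap is the same effect, in miniature, as the ~20% to ~90% jump reported above.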

Tech Stack

Python · PyTorch · SUMO · TraCI · DDQN · MADDPG · LSTM

#03 · Report Available

GVMP

Gateway Validation Module Placement

Network Reduction: 95%
E2E Latency: Low (Edge)
Load Balance: Optimized

Overview

A novel sibling-aware placement strategy for fog computing. Unlike the standard Edge-ward placement policy, which pushes tasks to the cloud when a node is full, GVMP validates resources on neighboring 'sibling' nodes via the gateway, keeping processing at the edge.

Architecture

┌─────────────────────────────────────────────────────────────┐
│                    GVMP Placement Logic                      │
├─────────────────────────────────────────────────────────────┤
│                                                             │
│       [Cloud]  <---- (3) Only if siblings full              │
│          ▲                                                  │
│          │                                                  │
│      [Gateway]  <--- (2) Check Siblings via Gateway         │
│      /                                                     │
│     /                                                      │
│ [Edge 1] -- [Edge 2]                                        │
│    ▲           ▲                                            │
│    │           │                                            │
│  Task 1      (1) Try Local First                            │
│                                                             │
│  Result: Task stays in Fog Layer, avoiding Cloud Backhaul   │
└─────────────────────────────────────────────────────────────┘
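The three-step fallback above can be sketched directly. The node names, capacities, and the `place` helper are illustrative, not GVMP's actual implementation:

```python
# Three-tier placement check: local edge node first, then sibling nodes
# reachable via the shared gateway, cloud only as the last resort.

class Node:
    def __init__(self, name, capacity):
        self.name, self.free = name, capacity

    def try_place(self, demand):
        """Reserve capacity for the task if this node has room."""
        if self.free >= demand:
            self.free -= demand
            return True
        return False

def place(task_demand, local, siblings):
    if local.try_place(task_demand):      # (1) try local first
        return local.name
    for sib in siblings:                  # (2) gateway validates siblings
        if sib.try_place(task_demand):
            return sib.name
    return "cloud"                        # (3) only if siblings are full
```

Three successive placements of an 8-unit task on two 10-unit edge nodes land on edge1, then edge2, then the cloud, which is exactly the ordering in the diagram.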

Technical Decisions

Decision: Sibling-Awareness
Trade-off: Search Complexity vs Network Load
Outcome: Checking sibling nodes before cloud offloading reduced network usage by 89-95% compared to EWMP, as proven in iFogSim simulations.

Decision: Virtual Testbed Validation
Trade-off: Implementation Effort vs Simulation Only
Outcome: Built a 15-container Docker testbed with Linux Traffic Control (tc) to validate findings. Confirmed 16 ms edge latency vs 5600 ms cloud latency.
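For context, the kind of artificial latency the testbed relies on is injected with Linux Traffic Control's netem qdisc; the interface name and delay value below are illustrative, not the exact testbed configuration:

```shell
# Add 100 ms of artificial delay to all egress traffic on eth0 (requires root)
tc qdisc add dev eth0 root netem delay 100ms

# Inspect, then remove, the emulation rule
tc qdisc show dev eth0
tc qdisc del dev eth0 root
```

Applying a small delay to edge-container interfaces and a large one to the cloud container is what makes an edge-vs-cloud latency gap measurable inside a single Docker host.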

Tech Stack

iFogSim · Docker · Linux TC · Prometheus · Grafana
