
Research Labs

/experimental/reports

Deep-dive systems research in distributed architectures, fog computing, and AI/ML. Each project is backed by rigorous simulations, technical reports, and quantifiable metrics.

Distributed Systems

Fog Computing, Edge Networks

UAV Networks

Mobility, Caching, RL Agents

AI/NLU

NER, Intent Classification

#01 · Project Completed

LOKI (NLP Engine)

Local-First Voice Assistant with Hybrid NLU

NER F1-Score: 99.77%
Fast-Path Latency: <60ms
Training Samples: 1389

Overview

A privacy-centric voice assistant that performs 100% of its inference locally. It addresses the latency and privacy issues of cloud-based assistants with a novel dual-layer intent classification system: high-speed vector embeddings handle common commands, with fallback to a local quantized LLM for complex semantic understanding.
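The dual-layer routing described above can be sketched in plain Python. The exemplar intents, the toy hashed bag-of-words "embedding", and the helper names are illustrative stand-ins (the real system uses Sentence-Transformers vectors); only the 0.6 cosine cutoff comes from the project.

```python
import hashlib
import math

def embed(text):
    """Toy stand-in embedding: hashed bag-of-words into 64 buckets.
    (LOKI uses Sentence-Transformers vectors; this keeps the sketch
    dependency-free and deterministic.)"""
    vec = [0.0] * 64
    for word in text.lower().split():
        bucket = int(hashlib.md5(word.encode()).hexdigest(), 16) % 64
        vec[bucket] += 1.0
    return vec

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical command exemplars for illustration.
EXEMPLARS = {
    "lights_on": embed("turn on the lights"),
    "play_music": embed("play some music"),
}

FAST_PATH_THRESHOLD = 0.6  # cosine cutoff reported for the fast path

def classify(utterance):
    """Fast path: nearest exemplar above the threshold; else defer
    the utterance to the local quantized LLM."""
    emb = embed(utterance)
    intent, score = max(
        ((name, cosine(emb, ex)) for name, ex in EXEMPLARS.items()),
        key=lambda pair: pair[1],
    )
    if score > FAST_PATH_THRESHOLD:
        return ("fast", intent)
    return ("llm_fallback", None)
```

A near-verbatim command takes the fast path, while an unfamiliar query scores below the threshold and routes to the LLM.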

System Architecture

(Interactive architecture diagram)

Technical Decisions

Decision: Hybrid Classifier Strategy
Trade-off: System Complexity vs. Responsiveness
Outcome: Implemented a 'Fast Path' (cosine similarity > 0.6) for instant execution of 90% of commands, reserving the heavy LLM only for complex, novel queries.

Decision: CRF for NER
Trade-off: Modernity vs. Efficiency
Outcome: Chose Conditional Random Fields over BERT for Named Entity Recognition to minimize CPU footprint while maintaining a 99.77% F1-score on parameter extraction.

Decision: Threaded Architecture
Trade-off: Dev Overhead vs. UX
Outcome: Decoupled audio acquisition (VAD) and inference workers from the UI thread, keeping the application responsive during heavy processing.
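The CRF decision above hinges on cheap, hand-crafted token features rather than transformer activations, which is what keeps the CPU footprint small. A minimal sketch of the usual sklearn-crfsuite feature template; the actual feature set LOKI trains on is not shown on this page:

```python
def word2features(tokens, i):
    """Hand-crafted features for one token (illustrative feature set)."""
    word = tokens[i]
    features = {
        "bias": 1.0,
        "word.lower": word.lower(),
        "word.isdigit": word.isdigit(),
        "word.istitle": word.istitle(),
        "prefix3": word[:3].lower(),
        "suffix3": word[-3:].lower(),
    }
    if i > 0:
        features["-1:word.lower"] = tokens[i - 1].lower()
    else:
        features["BOS"] = True  # beginning of sentence
    if i < len(tokens) - 1:
        features["+1:word.lower"] = tokens[i + 1].lower()
    else:
        features["EOS"] = True  # end of sentence
    return features

def sent2features(tokens):
    """Per-token feature dicts; these feed sklearn_crfsuite.CRF.fit."""
    return [word2features(tokens, i) for i in range(len(tokens))]
```

Each utterance becomes a list of feature dicts, so inference is a handful of string operations per token instead of a forward pass through a transformer.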

Tech Stack

Python · Faster-Whisper · Sentence-Transformers · Ollama (Dolphin-Phi) · sklearn-crfsuite

Key Metrics

NER F1-Score: 99.77%
Fast-Path Latency: <60ms
Training Samples: 1389

Resources

Source Code
#02 · Simulation Study

MUCEDS

Multi-UAV Cost-Efficient Deployment Scheme

Cache Hit Ratio: ~90%
Profit Increase: 4x vs. baseline
Latency: ~21 steps

Overview

A comprehensive optimization framework for UAV-assisted Vehicular Edge Computing. It integrates high-fidelity traffic simulation (SUMO) with a Hierarchical Reinforcement Learning (HRL) agent to dynamically position UAVs and a Spatial-Temporal LSTM to predict content demand, significantly reducing service latency.
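The strategic/tactical split can be sketched as two stand-in policy classes. The class names, fleet options, speed limit, and random action choices are placeholders for the trained DDQN and MADDPG agents; only the decomposition itself comes from the project.

```python
import random

class StrategicDDQN:
    """Stand-in for the strategic layer: picks a discrete fleet size.
    (Random policy here; the real agent is a trained Double DQN.)"""
    def __init__(self, fleet_options=(2, 4, 6, 8)):
        self.fleet_options = fleet_options

    def select_fleet_size(self, state):
        return random.choice(self.fleet_options)

class TacticalMADDPG:
    """Stand-in for the tactical layer: one continuous 2-D velocity
    vector per deployed UAV (real agent: multi-agent DDPG)."""
    def __init__(self, max_speed=15.0):
        self.max_speed = max_speed

    def select_velocities(self, state, n_uavs):
        return [
            (random.uniform(-self.max_speed, self.max_speed),
             random.uniform(-self.max_speed, self.max_speed))
            for _ in range(n_uavs)
        ]

def control_step(state, strategic, tactical):
    """One hierarchical decision: strategy fixes the fleet size,
    then tactics steer each UAV in that fleet."""
    n = strategic.select_fleet_size(state)
    return n, tactical.select_velocities(state, n)
```

Keeping the discrete fleet decision and the continuous steering decision in separate agents is what the table below credits with stabilizing training.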

System Architecture

(Interactive architecture diagram)

Technical Decisions

Decision: Hierarchical RL
Trade-off: Convergence Speed vs. Control
Outcome: Decomposed the problem: a DDQN manages the 'strategic' fleet size while MADDPG handles the 'tactical' velocity vectors, stabilizing training.

Decision: Predictive Caching (LSTM)
Trade-off: Compute Overhead vs. Backhaul Load
Outcome: Proactive content caching based on predicted vehicle trajectories improved the cache hit ratio from ~20% (Zipf baseline) to ~90%.

Decision: SUMO Physics Integration
Trade-off: Simulation Speed vs. Accuracy
Outcome: Replaced synthetic Random Waypoint models with real OpenStreetMap data (Delhi/Mumbai), proving system viability in realistic traffic flows.
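For intuition on the reactive baseline, a minimal LRU cache simulated under Zipf-distributed requests. Catalogue size, cache size, and the Zipf exponent here are illustrative, so the resulting hit ratio will not match the study's ~20% figure exactly:

```python
import random

def hit_ratio(cache_size, n_items, n_requests, s=1.0, seed=0):
    """Hit ratio of a reactive LRU cache under Zipf(s) demand — the
    kind of baseline the predictive LSTM scheme is compared against."""
    rng = random.Random(seed)
    # Zipf popularity: rank r gets probability mass proportional to 1/r^s.
    weights = [1.0 / (rank ** s) for rank in range(1, n_items + 1)]
    cache = []  # least-recently-used entry sits at index 0
    hits = 0
    for _ in range(n_requests):
        item = rng.choices(range(n_items), weights=weights, k=1)[0]
        if item in cache:
            hits += 1
            cache.remove(item)   # re-appended below to mark as fresh
        elif len(cache) >= cache_size:
            cache.pop(0)         # evict the LRU entry
        cache.append(item)
    return hits / n_requests
```

Because a reactive cache only learns popularity after the fact, its hit ratio is bounded by the head of the Zipf curve; predicting demand ahead of vehicle arrivals is what closes the gap toward ~90%.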

Tech Stack

Python · PyTorch · SUMO (TraCI) · DDQN & MADDPG · Spatial-Temporal LSTM

Key Metrics

Cache Hit Ratio: ~90%
Profit Increase: 4x vs. baseline
Latency: ~21 steps

Resources

Source Code
#03 · Testbed Validated

Fog Computing Testbed

Gateway Validation Module Placement (GVMP)

Network Reduction: 95%
Edge Latency: ~16ms
Cloud Latency: ~5600ms

Overview

A 4-tier distributed Fog Computing virtual testbed built with Docker to evaluate IoT application placement strategies. The study validates the 'GVMP' heuristic, which prioritizes horizontal resource sharing (sibling nodes) over vertical cloud offloading, using realistic network emulation.
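The sibling-first rule at the heart of GVMP can be sketched as a small placement function. The node names, the single scalar capacity model, and the function name are hypothetical; the real heuristic and its gateway protocol are documented in the project itself.

```python
def gvmp_place(module_demand, siblings, cloud="cloud"):
    """Sibling-first placement: the gateway checks horizontal
    (sibling-node) capacity before offloading vertically to the cloud.

    siblings: {node_name: free_capacity}, in illustrative units.
    """
    # Prefer the sibling with the most headroom that can fit the module.
    for node, free in sorted(siblings.items(), key=lambda kv: -kv[1]):
        if free >= module_demand:
            return node   # horizontal placement: traffic stays at the edge
    return cloud          # vertical offload only as a last resort
```

Keeping placements among siblings is what avoids traversing the core network, which is where the 95% reduction reported below comes from.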

System Architecture

(Interactive architecture diagram)

Technical Decisions

Decision: Sibling-Aware Placement
Trade-off: Search Latency vs. Bandwidth
Outcome: Validating resources on neighbor nodes via the gateway reduced core-network usage by 95% compared to standard Edge-Ward placement.

Decision: Kernel-Level Emulation
Trade-off: Setup Complexity vs. Realism
Outcome: Used Linux 'tc' and 'netem' inside containers to inject real jitter and packet loss, proving the strategy holds up under degraded network conditions.

Decision: Cgroup Monitoring
Trade-off: Implementation Effort vs. Accuracy
Outcome: Built custom monitors reading /sys/fs/cgroup to calculate normalized CPU load, ensuring accurate performance metrics across heterogeneous node types.
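A sketch of the cgroup-based load calculation, assuming the cgroup-v2 `cpu.stat` format (cumulative `usage_usec` in microseconds); the testbed's actual monitor code is not shown on this page, and a v1 testbed would read `cpuacct` files instead.

```python
def parse_usage_usec(cpu_stat_text):
    """Extract cumulative CPU time (microseconds) from the text of a
    cgroup-v2 cpu.stat file (e.g. /sys/fs/cgroup/<name>/cpu.stat)."""
    for line in cpu_stat_text.splitlines():
        key, _, value = line.partition(" ")
        if key == "usage_usec":
            return int(value)
    raise ValueError("usage_usec not found in cpu.stat contents")

def normalized_cpu_load(sample_a, sample_b, interval_usec, n_cpus):
    """CPU utilisation in [0, 1] between two cpu.stat samples taken
    interval_usec apart, normalised by core count so heterogeneous
    node types are comparable."""
    delta = parse_usage_usec(sample_b) - parse_usage_usec(sample_a)
    return delta / (interval_usec * n_cpus)
```

Normalising by `n_cpus` is the step that makes a 50%-loaded 4-core node and a 50%-loaded 2-core node report the same figure, which the table above flags as the accuracy concern.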

Tech Stack

Docker Compose · Linux Traffic Control (tc) · Prometheus/Grafana · Python (Flask) · Redis

Key Metrics

Network Reduction: 95%
Edge Latency: ~16ms
Cloud Latency: ~5600ms

Resources

Source Code