hi, i'm prashant.
Derid (10+ users)
A lightweight tool that audits a website's robots.txt and metadata for crawlability, checking whether LLMs and AI tools can access it and flagging SEO improvements.
Zempt (15+ users)
A browser extension that gives you instant, simple explanations for any text you select on a webpage, right on the same page, without disrupting your browsing.
AntiSlopsquat (50+ Installs)
An open-source Python package that protects your projects from hallucinated and malicious package imports suggested by LLMs.
AutoPulse
A real-time vehicle tracking and visualization system built with Pub/Sub, Flink, and BigQuery. It streams live car telemetry and visualizes vehicle movements on an interactive map.
QueryEase
QueryEase is a user-friendly application that bridges the gap between natural language and database queries, helping users interact with data effortlessly.
VecMem
A local MCP server with vector memory, providing a memory layer that lets Claude Desktop recall and relate past context.
Implementing DeepSeek-OCR on Google Colab
DeepSeek recently released DeepSeek-OCR; its research paper focuses on vision-text compression, where the model decodes thousands of text tokens from just a few hundred vision tokens. I wanted to test this, so I set up a small Colab pipeline to see how well it works ...
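For context, the pipeline is essentially just loading the model from Hugging Face and running it on a page image. Here's a minimal sketch; the model id and the custom `infer` call follow the model card as I remember it, so treat the exact signature, prompt format, and parameters as assumptions:

```python
# Minimal sketch of the Colab setup. Assumes the Hugging Face model id
# "deepseek-ai/DeepSeek-OCR" and the custom `infer` helper from its model
# card (loaded via trust_remote_code); the exact API may differ or change.
from transformers import AutoModel, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-OCR"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModel.from_pretrained(model_id, trust_remote_code=True).eval().cuda()

# Hypothetical prompt format taken from the model card examples.
result = model.infer(
    tokenizer,
    prompt="<image>\nFree OCR.",
    image_file="page.png",   # any scanned page or screenshot
)
print(result)
```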
Read on Medium

How Do LLMs Decide the Next Token?
Large Language Models (LLMs) like ChatGPT, Gemini, or Claude generate text one piece at a time. They don’t write full sentences in one go. Instead, they decide the next token, add it to the text, then repeat the process again and again until the response is complete ...
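That loop is easy to see in code. Here's a minimal greedy-decoding sketch (GPT-2 is just a stand-in for any causal LM): score every token in the vocabulary, pick the most likely one, append it, and repeat.

```python
# Minimal greedy next-token loop: at each step the model scores all tokens,
# we append the most likely one, and repeat until EOS or a length cap.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

input_ids = tokenizer("The capital of France is", return_tensors="pt").input_ids
for _ in range(10):
    with torch.no_grad():
        logits = model(input_ids).logits           # [batch, seq_len, vocab]
    next_id = logits[0, -1].argmax()               # greedy: highest-scoring token
    input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=-1)
    if next_id.item() == tokenizer.eos_token_id:   # stop at end-of-sequence
        break
print(tokenizer.decode(input_ids[0]))
```

Real systems usually sample from the distribution (temperature, top-k, top-p) instead of always taking the argmax, but the decide-append-repeat loop is the same.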
Read on Medium

Understanding Artificial Neural Networks (ANNs): A Beginner's Guide
Artificial Neural Networks (ANNs) are one of the most important concepts in machine learning and artificial intelligence. Inspired by how the human brain works, ANNs are designed to recognize patterns, make predictions, and solve problems. This article explains ANNs in simple terms with formulas, diagrams, and examples ...
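At its core, every neuron in an ANN computes the same formula: a weighted sum of its inputs plus a bias, passed through a nonlinear activation. A tiny NumPy forward pass makes this concrete (the shapes and values here are made up for illustration):

```python
# A tiny two-layer forward pass: each neuron computes a weighted sum of
# its inputs plus a bias, then applies a nonlinear activation (sigmoid).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
x = np.array([0.5, -1.2, 3.0])                   # 3 input features
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)    # hidden layer: 4 neurons
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)    # output layer: 1 neuron

h = sigmoid(W1 @ x + b1)   # hidden activations
y = sigmoid(W2 @ h + b2)   # network prediction
print(y)
```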
Read on Medium

KV Cache Recycling to Expand Usable Context Capacity in Low Parameter LLMs
accepted for publication in IJRSI 2026
Investigates whether attention key-value (KV) states computed for one prompt on a small LLM can be reused to accelerate inference on a new, similar prompt, expanding the model's usable context memory through token recycling.
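The simplest version of the mechanism, reusing the cache for an exactly shared prefix, looks like this with Hugging Face's `past_key_values`. The paper is about recycling across similar rather than identical prompts, so this is only a sketch of the underlying machinery, with GPT-2 standing in for a small LLM:

```python
# Sketch of KV reuse: compute the KV cache for a shared prefix once, then
# continue with a new prompt's suffix without re-encoding the prefix.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

prefix_ids = tokenizer("System: answer briefly.", return_tensors="pt").input_ids
with torch.no_grad():
    cache = model(prefix_ids, use_cache=True).past_key_values  # KV for the prefix

# New prompt sharing that prefix: feed only the suffix, reusing the cache.
suffix_ids = tokenizer(" Question: what is 2+2?", return_tensors="pt").input_ids
with torch.no_grad():
    out = model(suffix_ids, past_key_values=cache, use_cache=True)
next_id = out.logits[0, -1].argmax()
print(tokenizer.decode(next_id))
```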