What I write about

Thursday, 15 May 2025

Intelligent Proctoring System Using OpenCV, Mediapipe, Dlib & Speech Recognition

ProctorAI: Intelligent Proctoring System Using OpenCV, Mediapipe, Dlib & Speech Recognition

ProctorAI is a real-time AI-based proctoring solution that uses a combination of computer vision and audio analysis to detect and alert on suspicious activities during an exam or assessment. This system uses OpenCV, Mediapipe, Dlib, pygetwindow, and SpeechRecognition to offer a comprehensive exam monitoring tool.

👉 View GitHub Repository

🔍 Key Features

  • Face detection and tracking using mediapipe and dlib
  • Eye and pupil movement monitoring for head and gaze tracking
  • Audio detection for identifying background conversation
  • Multi-screen detection via open window tracking
  • Real-time alert overlays on camera feed
  • Interactive quit button on the camera feed

⚙️ How It Works

  1. The webcam feed is captured using OpenCV.
  2. Face and eye landmarks are detected using mediapipe.
  3. dlib tracks the pupil by analyzing the eye region.
  4. The system checks for head movement and eye/pupil movement, and determines whether a face is present.
  5. Running applications are scanned using pygetwindow to detect multiple active windows.
  6. Background audio is captured and analyzed using speech_recognition.
  7. Alerts are displayed on-screen in real-time if any suspicious activity is detected.
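To make steps 1, 2, and 7 concrete, here is a minimal sketch of the capture-and-detect loop using OpenCV and Mediapipe's FaceMesh. It is only an illustration of the pattern, not the project's actual code; the full pipeline (Dlib pupil tracking, window scanning, audio, threading) is in the repository.

import cv2
import mediapipe as mp

mp_face_mesh = mp.solutions.face_mesh

cap = cv2.VideoCapture(0)  # step 1: open the webcam feed
with mp_face_mesh.FaceMesh(max_num_faces=1, refine_landmarks=True) as face_mesh:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # step 2: detect face and eye landmarks on the RGB frame
        results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if not results.multi_face_landmarks:
            # step 7: overlay an alert when no face is visible
            cv2.putText(frame, "ALERT: face not detected", (10, 30),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 0, 255), 2)
        cv2.imshow("ProctorAI sketch", frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
cap.release()
cv2.destroyAllWindows()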

🧠 Tech Stack

  • OpenCV - Video capture and frame rendering
  • Mediapipe - Facial landmark and face detection
  • Dlib - Pupil detection and facial geometry
  • SpeechRecognition - Audio analysis
  • PyGetWindow - Application window detection
  • Threading - For concurrent execution of detection modules

🚨 Alerts Triggered By

  • Missing face (student left or covered the webcam)
  • Sudden or excessive head movement
  • Unusual pupil movement (possibly looking elsewhere)
  • Multiple open windows (indicative of cheating)
  • Background voice detected (someone speaking)
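The last two alerts in the list above, multiple open windows and background voice, can be approximated with pygetwindow and SpeechRecognition. The thresholds and window limit below are illustrative assumptions, not the project's exact logic.

import pygetwindow as gw
import speech_recognition as sr

def too_many_windows(limit=2):
    # Count visible windows with a non-empty title; more than `limit` raises a flag
    titles = [t for t in gw.getAllTitles() if t.strip()]
    return len(titles) > limit

def background_voice_detected(timeout=5):
    # Listen briefly on the default microphone (requires PyAudio) and flag
    # any recognizable speech in the background
    recognizer = sr.Recognizer()
    with sr.Microphone() as source:
        audio = recognizer.listen(source, timeout=timeout, phrase_time_limit=timeout)
    try:
        return bool(recognizer.recognize_google(audio))
    except (sr.UnknownValueError, sr.RequestError):
        return False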

📦 Installation

git clone https://github.com/anirbanduttaRM/ProctorAI
cd ProctorAI
pip install -r requirements.txt

Also, make sure to download shape_predictor_68_face_landmarks.dat from dlib.net and place it in the root directory.
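For reference, that .dat file is the model loaded by Dlib's 68-point landmark predictor. A minimal loading snippet (illustrative only; the file name frame.jpg is a placeholder) looks like this:

import cv2
import dlib

detector = dlib.get_frontal_face_detector()  # HOG-based face detector
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

gray = cv2.cvtColor(cv2.imread("frame.jpg"), cv2.COLOR_BGR2GRAY)
for face in detector(gray):
    landmarks = predictor(gray, face)  # 68 facial landmark points
    eye_region = [(landmarks.part(i).x, landmarks.part(i).y) for i in range(36, 42)]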

▶️ Running the App

python main.py

🖼️ Screenshots

🎥 Demo Video

📌 Future Improvements

  • Face recognition to match identity
  • Web integration for remote monitoring
  • Data logging for offline audit and analytics
  • Improved natural language processing for audio context

🤝 Contributing

Pull requests are welcome! For major changes, please open an issue first to discuss what you would like to change.

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.


Made with ❤️ by Anirban Dutta

Saturday, 12 April 2025

Emergence of adaptive, agentic collaboration

Emergence of Adaptive, Agentic Collaboration

A playful game that reveals the future of multi-agent AI systems

🎮 A Simple Game? Look Again

At first glance, it seems straightforward: move the rabbit, avoid the wolves, and survive. But behind the cute aesthetics lies something powerful—a simulation of intelligent, agent-based collaboration.

Gameplay Screenshot

🐺 Agentic AI in Action

Each wolf is more than a chaser. Under the guidance of a Coordinator Agent, these AI entities adapt roles on the fly:

  • 🐾 Chaser Wolf: Follows the rabbit directly
  • 🧠 Flanker Wolf: Predicts the rabbit's path and intercepts it

This is not hardcoded; it is adaptive, collaborative intelligence in motion (a toy sketch follows below).

Wolves Coordinating
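As a toy illustration of the pattern (not the game's actual code), a coordinator could reassign roles every frame from the agents' positions:

import math

def assign_roles(wolf_positions, rabbit_position):
    # Toy coordinator: the wolf closest to the rabbit becomes the chaser,
    # every other wolf becomes a flanker that tries to cut the rabbit off.
    ordered = sorted(wolf_positions, key=lambda w: math.dist(w, rabbit_position))
    return {wolf: ("chaser" if i == 0 else "flanker") for i, wolf in enumerate(ordered)}

# Example: three wolves and a rabbit on a 2D grid
print(assign_roles([(0, 0), (5, 5), (9, 2)], (4, 4)))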

📊 Interactive Diagram: Wolf Agent Roles

[Interactive diagram: Chaser Wolf, Interceptor Wolf, and Coordinator Agent nodes]

🌍 Beyond the Game: Real-World Impact

This simulation offers insights for:

  • 🚚 Smart delivery fleets
  • 🧠 Healthcare diagnosis agents
  • 🤖 Robotic manufacturing units

🎥 Watch It in Action

© 2025 Anirban Dutta. All rights reserved.

Saturday, 29 March 2025

The Complete Picture: Understanding the Full Software Procurement Lifecycle

 If you regularly respond to Requests for Proposals (RFPs), you've likely mastered crafting compelling responses that showcase your solution's capabilities. But here's something worth considering: RFPs are just one piece of a much larger puzzle.

Like many professionals, I used to focus solely on the RFP itself - until I realized how much happens before and after that document gets issued. Understanding this complete lifecycle doesn't just make you better at responding to RFPs; it transforms how you approach the entire sales process.



1. Request for Information (RFI): The Discovery Phase

Before any RFP exists, organizations typically begin with an RFI (Request for Information). Think of this as their research phase - they're exploring what solutions exist in the market without committing to anything yet.

Key aspects of an RFI:

  • Gathering market intelligence about available technologies

  • Identifying potential vendors with relevant expertise

  • Understanding current capabilities and industry trends

Why this matters: When you encounter vague or oddly specific RFPs, it often means the buyer skipped or rushed this discovery phase. A thorough RFI leads to better-defined RFPs that are easier to respond to effectively.

Real-world example: A healthcare provider considering AI for patient records might use an RFI to learn about OCR and NLP solutions before crafting their actual RFP requirements.


2. Request for Proposal (RFP): The Formal Evaluation

This is the stage most vendors know well - when buyers officially outline their needs and ask vendors to propose solutions.

What buyers are really doing:

  • Soliciting detailed proposals from qualified vendors

  • Comparing solutions, pricing, and capabilities systematically

  • Maintaining a transparent selection process

Key to success: Generic responses get lost in the shuffle. The winners are those who submit tailored proposals that directly address the buyer's specific pain points with clear, relevant solutions.


3. Proposal Evaluation: Behind Closed Doors

After submissions come in, buyers begin their assessment. This phase combines:

Technical evaluation: Does the solution actually meet requirements?
Financial analysis: Is it within budget with no hidden costs?
Vendor assessment: Do they have proven experience and solid references?

Pro tip: Even brilliant solutions can lose points on small details. Include a clear requirements mapping table to make evaluators' jobs easier.


4. Letter of Intent (LOI): The Conditional Commitment

When a buyer selects their preferred vendor, they typically issue an LOI. This isn't a final contract, but rather a statement that says, "We plan to work with you, pending final terms."

Why this stage is crucial: It allows both parties to align on key terms before investing in full contract negotiations.

For other vendors: Don't despair if you're not the primary choice. Many organizations maintain backup options in case primary negotiations fall through.


5. Statement of Work (SOW): Defining the Engagement

Before work begins, both parties collaborate on an SOW that specifies:

  • Exact project scope (inclusions and exclusions)

  • Clear timelines and milestones

  • Defined roles and responsibilities

The value: A well-crafted SOW prevents scope creep and ensures everyone shares the same expectations from day one.


6. Purchase Order (PO): The Green Light

The PO transforms the agreement into an official, legally binding commitment covering:

  • Payment terms and schedule

  • Delivery expectations and deadlines

  • Formal authorization to begin work

Critical importance: Never start work without this formal authorization - it's your financial and legal safeguard.


7. Project Execution: Delivering on Promises

This is where your solution comes to life through:

  • Development and testing

  • Performance validation

  • Final deployment

Key insight: How you execute often matters more than what you promised. Delivering as promised (or better) builds the foundation for long-term relationships.


8. Post-Implementation: The Long Game

The relationship doesn't end at go-live. Ongoing success requires:

  • Responsive support and maintenance

  • Continuous performance monitoring

  • Regular updates and improvements

Strategic value: This phase often determines whether you'll secure renewals and expansions. It's where you prove your commitment to long-term partnership.


Why This Holistic View Matters

Understanding the complete procurement lifecycle enables you to:

  • Craft more effective proposals by anticipating the buyer's full journey

  • Develop strategies that address needs beyond the immediate RFP

  • Position yourself as a strategic partner rather than just another vendor

Final thought: When you respond to an RFP, you're not just submitting a proposal - you're entering a relationship that will evolve through all these stages. The most successful vendors understand and prepare for this entire journey, not just the initial document.




Saturday, 22 February 2025

The Journey Beyond Learning: My Year at IIM Lucknow

A year ago, I embarked on a journey at IIM Lucknow, driven by the pursuit of professional growth. I sought knowledge, expertise, and a refined understanding of business dynamics. But as I stand at the end of this transformative chapter, I realize I am leaving with something far greater—a profound evolution of my spirit, character, and perception of life.

What began as a quest for professional excellence soon unfolded into a deeply personal and spiritual exploration. The structured curriculum, case discussions, and strategic frameworks were invaluable, but what truly shaped me was the realization that growth is not just about skills—it’s about resilience, patience, and self-discipline. And nowhere was this lesson more evident than in a simple yet powerful idea: “I can think, I can wait, I can fast.”

The Wisdom of Siddhartha: The Lessons We Often Overlook

Hermann Hesse’s Siddhartha tells the story of a man in search of enlightenment. When asked about his abilities, Siddhartha humbly states:
“I can think, I can wait, I can fast.”
At first glance, these may seem like ordinary statements. But as I reflected on them, I saw their profound relevance—not just in spiritual journeys but in our professional and personal lives as well.

Thinking: The Power of Deep Contemplation

In an environment as intense as IIM, quick decisions and rapid problem-solving are often celebrated. But I realized that the true power lies in the ability to pause, reflect, and analyze beyond the obvious. Critical thinking is not just about finding solutions—it is about questioning assumptions, challenging biases, and understanding perspectives beyond our own. The ability to think deeply is what sets apart great leaders from the rest.

Waiting: The Strength in Patience

Patience is an underrated virtue in a world that demands instant results. IIM taught me that waiting is not about inaction—it is about perseverance. There were times when ideas took longer to materialize, when failures felt discouraging, when the next step seemed uncertain. But waiting allowed me to develop resilience, to trust the process, and to realize that true success is not immediate—it is earned over time.

Fasting: The Discipline to Endure

Fasting is not just about food—it is about the ability to withstand hardships and resist temptations. In the corporate world, in leadership, and in life, there will be moments of struggle, of deprivation, of difficult choices. The ability to endure, to sacrifice short-term pleasures for long-term goals, is what defines true strength. At IIM, I learned to push beyond my comfort zone, to embrace challenges with determination, and to understand that true discipline is the key to transformation.

More Than an Institution—A Journey of Self-Discovery

IIM Lucknow was not just an academic experience; it was a crucible that shaped my mind, spirit, and character. I came seeking professional advancement, but I left with something far deeper—an understanding of what it means to be a better human being.

Beyond business models and strategy decks, I learned that the greatest asset is self-awareness, the greatest skill is patience, and the greatest success is inner peace.

A heartfelt thanks to Professor Neerja Pande, whose guidance in communication not only refined my professional skills but also enlightened us with a path of spirituality and wisdom, leading to profound personal and professional growth.

As we strive for excellence in our careers, let us not forget to nurture the qualities that make us better individuals—the ability to think, to wait, and to fast. Because in mastering these, we master not just our professions but our very existence.

This is not just my story—it is a reminder for all of us, and a lesson we must pass on to the next generation.



Friday, 31 January 2025

The Evolution of AI Assistants: From Generic to Personalized Recommendations

In the world of AI, the difference between a generic bot and a personalized assistant is like night and day. Let me walk you through the journey of how AI assistants are evolving to become more tailored and intuitive, offering recommendations that feel like they truly "know" you.

The Generic Bot: A One-Size-Fits-All Approach

The first bot we’ll discuss is a generalized AI assistant built on generic data. It’s designed to provide recommendations and answers based on widely available information. While it’s incredibly useful, it has its limitations. For instance, if you ask it for a restaurant recommendation, it might suggest popular places but won’t consider your personal preferences. The responses may vary slightly depending on how the question is phrased, but fundamentally, the recommendations remain the same for everyone.

This bot is a great starting point, but it lacks the ability to adapt to individual users. It doesn’t know your likes, dislikes, or unique needs. It’s like talking to a knowledgeable stranger—helpful, but not deeply connected to you.

The Personalized Bot: Tailored Just for You

Now, let’s talk about the second bot—a fine-tuned, personalized assistant. This bot is designed specifically for an individual, taking into account their preferences, habits, and even past interactions. For example, if the user is a vegetarian, the bot will recommend vegetarian-friendly restaurants without being explicitly told each time. It remembers the user’s preferences and uses that information to provide highly relevant recommendations.

This level of personalization makes the bot feel like a close friend who truly understands you. It’s not just an assistant; it’s a companion that grows with you, learning from your interactions and adapting to your needs.

The Value of Personalization in AI

The shift from generic to personalized AI assistants represents a significant leap in technology. Here’s why it matters:

  1. Relevance: Personalized bots provide recommendations that align with your unique preferences, making them far more useful.
  2. Efficiency: By knowing your preferences, the bot can save you time by filtering out irrelevant options.
  3. Connection: A personalized assistant feels more intuitive and human-like, fostering a stronger bond between the user and the technology.

The Future of AI Assistants

As AI continues to evolve, we can expect more assistants to move toward personalization. Imagine a world where your AI assistant not only knows your favorite foods but also understands your mood, anticipates your needs, and offers support tailored to your personality. This is where AI is headed—a future where technology feels less like a tool and more like a trusted companion.

Final Thoughts

The journey from generic to personalized AI assistants highlights the incredible potential of AI to transform our lives. While generic bots are useful, personalized assistants take the experience to a whole new level, offering recommendations and support that feel uniquely yours. As we continue to innovate, the line between technology and human-like understanding will blur, creating a future where AI truly knows and cares about you.

Thanks for reading, and here’s to a future filled with smarter, more personalized AI!




Tuesday, 31 December 2024

Optimizing Azure Document Intelligence for Performance and Cost Savings: A Case Study

    As a developer working with Azure Document Intelligence, optimizing document processing is crucial to reduce processing time without compromising the quality of output. In this post, I will share how I managed to improve the performance of my text analytics code, significantly reducing the processing time from 10 seconds to just 3 seconds, with no impact on the output quality.

Original Code vs Optimized Code

Initially, the document processing took around 10 seconds, which was decent but could be improved for better scalability and faster execution. After optimization, the processing time was reduced to just 3 seconds by applying several techniques, all without affecting the quality of the results.

Original Processing Time

  • Time taken to process: 10 seconds

Optimized Processing Time

  • Time taken to process: 3 seconds

Steps Taken to Optimize the Code

Here are the key changes I made to optimize the document processing workflow:

1. Preprocessing the Text

Preprocessing the text before passing it to Azure's API is essential for cleaning and normalizing the input data. This helps remove unnecessary characters, stop words, and any noise that could slow down processing. A simple preprocessing function was added to clean the text before calling the Azure API. Additionally, preprocessing reduces the number of tokens sent to Azure's API, directly lowering the associated costs since Azure charges based on token usage.

import re

def preprocess_text(text):
    # Implement text cleaning: remove unnecessary characters, normalize text, etc.
    cleaned_text = text.lower()  # Example: convert to lowercase
    cleaned_text = re.sub(r'[^\w\s]', '', cleaned_text)  # Remove punctuation
    return cleaned_text

2. Specifying the Language Parameter

Azure Text Analytics API automatically detects the language of the document, but specifying the language parameter in API calls can skip this detection step, thereby saving time.

For example, by specifying language="en" when calling the API for recognizing PII entities, extracting key phrases, or recognizing named entities, we can directly process the text and skip language detection.

# Recognize PII entities
pii_responses = text_analytics_client.recognize_pii_entities(documents, language="en")

# Extract key phrases
key_phrases_responses = text_analytics_client.extract_key_phrases(documents, language="en")

# Recognize named entities
entities_responses = text_analytics_client.recognize_entities(documents, language="en")

This reduces unnecessary overhead and speeds up processing, especially when dealing with a large number of documents in a specific language.

3. Batch Processing

Another performance optimization technique is to batch multiple documents together and process them in parallel. This reduces the overhead of making multiple individual API calls. By sending a batch of documents, Azure can process them in parallel, which leads to faster overall processing time.

# Example of sending multiple documents in one batch
documents = ["Document 1 text", "Document 2 text", "Document 3 text"]
# Each client method accepts a list of documents, so one call covers the whole batch
batch_responses = text_analytics_client.recognize_entities(documents, language="en")

4. Parallel API Calls

If you’re working with a large dataset, consider using parallel API calls for independent tasks. For example, you could recognize PII entities in one set of documents while extracting key phrases from another set. This parallel processing can be achieved using multi-threading or asynchronous calls.
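As a minimal sketch, assuming an already authenticated text_analytics_client and two independent document batches (the variable names here are illustrative), a thread pool can fire both calls at once:

from concurrent.futures import ThreadPoolExecutor

pii_docs = ["Document 1 text", "Document 2 text"]
key_phrase_docs = ["Document 3 text", "Document 4 text"]

with ThreadPoolExecutor(max_workers=2) as pool:
    # Submit the two independent tasks so they run concurrently
    pii_future = pool.submit(
        text_analytics_client.recognize_pii_entities, pii_docs, language="en")
    phrases_future = pool.submit(
        text_analytics_client.extract_key_phrases, key_phrase_docs, language="en")

    pii_responses = pii_future.result()            # PII results for the first batch
    key_phrases_responses = phrases_future.result()  # key phrases for the second batch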

Performance Gains

After applying these optimizations, the processing time dropped from 10 seconds to just 3 seconds per execution, which represents a 70% reduction in processing time. This performance boost is particularly valuable when dealing with large-scale document processing, where speed is critical.

Conclusion

Optimizing document processing with Azure Document Intelligence not only improves performance but also reduces operational costs. By incorporating preprocessing steps, specifying the language parameter, and utilizing batch and parallel processing, you can achieve significant performance improvements while maintaining output quality and minimizing costs by reducing token usage.

If you’re facing similar challenges, try out these optimizations and see how they work for your use case. I’d love to hear about any additional techniques you’ve used to speed up your document processing workflows while saving costs.
