Showing posts with label LLM. Show all posts

Wednesday, 20 November 2024

Building BloomBot: A Comprehensive Guide to Creating an AI-Powered Pregnancy Companion Using Gemini API

Solution approach for BloomBot

1. Problem Definition and Goals

Objective:

  • Develop BloomBot, an AI-powered chatbot tailored for expecting mothers to provide:
    • Pregnancy tips
    • Nutrition advice by week
    • Emotional support resources
    • A conversational interface for queries

Key Requirements:

  • AI-Powered Chat: Leverage Gemini for generative responses.
  • User Interface: Interactive and user-friendly chatbot interface.
  • Customization: Adapt responses based on pregnancy stages.
  • Scalability: Handle concurrent user interactions efficiently.

2. Architecture Overview

Key Components:

  1. Frontend:

    • Tool: Tkinter for desktop GUI.
    • Features: Buttons, dropdowns, text areas for interaction.
  2. Backend:

    • Role: Acts as a bridge between the frontend and Gemini API.
    • Tech Stack: Python with google.generativeai for Gemini API integration.
  3. Gemini API:

    • Purpose: Generate responses for user inputs.
    • Capabilities Used: Content generation, chat handling.
  4. Environment Configuration:

    • Secure API key storage using .env file and dotenv.
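The environment-configuration step can be sketched as below. This is a minimal example, assuming the key is stored under the name `GEMINI_API_KEY` (the variable name is an assumption, not something the post specifies); python-dotenv is treated as optional so the helper also works with plain environment variables.

```python
import os

def load_gemini_api_key():
    """Read the Gemini API key from the environment, preferring a .env file.

    python-dotenv is optional here; if it is not installed, we fall back to
    plain environment variables.
    """
    try:
        from dotenv import load_dotenv  # provided by the python-dotenv package
        load_dotenv()  # merges .env entries into os.environ (no override)
    except ImportError:
        pass
    key = os.getenv("GEMINI_API_KEY")  # assumed variable name
    if not key:
        raise RuntimeError("GEMINI_API_KEY is not set; add it to your .env file")
    return key
```

The key never appears in source code; rotating it only requires editing the .env file.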

3. Solution Workflow

Frontend Interaction:

  • Users interact with BloomBot via a Tkinter-based GUI:
    • Buttons for specific tasks (e.g., pregnancy tips, nutrition advice).
    • A dropdown for selecting pregnancy weeks.
    • A text area for displaying bot responses.

Backend Processing:

  1. Task-Specific Prompts:
    • Predefined prompts for tasks like fetching pregnancy tips or emotional support.
    • Dynamic prompts (e.g., week-specific nutrition advice).
  2. Free-Form Queries:
    • Use the chat feature of Gemini to handle user inputs dynamically.
  3. Response Handling:
    • Parse and return Gemini's response to the frontend.
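The response-handling step above can be sketched with a small helper. `StubModel` is a hypothetical stand-in that mimics the `generate_content(...) -> object-with-.text` shape of the Gemini SDK, so the parse-and-fallback logic can be shown without an API key; the fallback message is illustrative.

```python
class StubModel:
    """Hypothetical stand-in mimicking model.generate_content(...).text."""
    def __init__(self, text):
        self._text = text

    def generate_content(self, prompt):
        class Result:
            pass
        result = Result()
        result.text = self._text
        return result

def ask(model, prompt, fallback="Sorry, I couldn't fetch a response right now."):
    """Send a prompt and return its text, falling back on empty or failed replies."""
    try:
        result = model.generate_content(prompt)
    except Exception:
        return fallback
    return result.text if getattr(result, "text", None) else fallback
```

Centralizing the fallback in one helper keeps the task-specific methods free of repeated error-handling code.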

Gemini API Integration:

  • Models Used: gemini-1.5-flash.
  • API methods like generate_content for static prompts and start_chat for conversational queries.

4. Implementation Details

Backend Implementation

Key Features:

  1. Pregnancy Tip Generator:
    • Prompt: "Give me a helpful tip for expecting mothers."
    • Method: generate_content.
  2. Week-Specific Nutrition Advice:
    • Dynamic prompt: "Provide nutrition advice for week {week} of pregnancy."
    • Method: generate_content.
  3. Emotional Support Resources:
    • Prompt: "What resources are available for emotional support for expecting mothers?"
    • Method: generate_content.
  4. Chat Handler:
    • Start a conversation: start_chat.
    • Handle free-form queries.

Code Snippet:


import google.generativeai as genai

class ExpectingMotherBotBackend:
    def __init__(self, api_key):
        self.api_key = api_key
        genai.configure(api_key=self.api_key)
        self.model = genai.GenerativeModel("models/gemini-1.5-flash")

    def get_pregnancy_tip(self):
        prompt = "Give me a helpful tip for expecting mothers."
        result = self.model.generate_content(prompt)
        return result.text if result.text else "Sorry, I couldn't fetch a tip right now."

    def get_nutrition_advice(self, week):
        prompt = f"Provide nutrition advice for week {week} of pregnancy."
        result = self.model.generate_content(prompt)
        return result.text if result.text else "I couldn't fetch nutrition advice at the moment."

    def get_emotional_support(self):
        prompt = "What resources are available for emotional support for expecting mothers?"
        result = self.model.generate_content(prompt)
        return result.text if result.text else "I'm having trouble fetching emotional support resources."

    def chat_with_bot(self, user_input):
        chat = self.model.start_chat()
        response = chat.send_message(user_input)
        return response.text if response.text else "I'm here to help, but I didn't understand your query."

Frontend Implementation

Key Features:

  1. Buttons and Inputs:
    • Fetch pregnancy tips, nutrition advice, or emotional support.
  2. Text Area:
    • Display bot responses with a scrollable interface.
  3. Dropdown:
    • Select pregnancy week for tailored nutrition advice.

Code Snippet:


import tkinter as tk
from tkinter import ttk

class ExpectingMotherBotFrontend:
    def __init__(self, backend):
        self.backend = backend
        self.window = tk.Tk()
        self.window.title("BloomBot: Pregnancy Companion")
        self.window.geometry("500x650")
        self.create_widgets()

    def create_widgets(self):
        title_label = tk.Label(self.window, text="BloomBot: Your Pregnancy Companion")
        title_label.pack()

        # Buttons for functionalities
        tip_button = tk.Button(self.window, text="Get Daily Pregnancy Tip", command=self.show_pregnancy_tip)
        tip_button.pack()

        self.week_dropdown = ttk.Combobox(self.window, values=[str(i) for i in range(1, 51)], state="readonly")
        self.week_dropdown.pack()

        nutrition_button = tk.Button(self.window, text="Get Nutrition Advice", command=self.show_nutrition_advice)
        nutrition_button.pack()

        support_button = tk.Button(self.window, text="Emotional Support", command=self.show_emotional_support)
        support_button.pack()

        self.response_text = tk.Text(self.window)
        self.response_text.pack()

    def show_pregnancy_tip(self):
        tip = self.backend.get_pregnancy_tip()
        self.display_response(tip)

    def show_nutrition_advice(self):
        week = self.week_dropdown.get()
        if not week:  # guard against no selection before int() conversion
            self.display_response("Please select a pregnancy week first.")
            return
        advice = self.backend.get_nutrition_advice(int(week))
        self.display_response(advice)

    def show_emotional_support(self):
        support = self.backend.get_emotional_support()
        self.display_response(support)

    def display_response(self, response):
        self.response_text.delete(1.0, tk.END)
        self.response_text.insert(tk.END, response)

5. Deployment

Steps:

  1. Environment Setup:
    • Install required packages: pip install google-generativeai python-dotenv (Tkinter ships with the standard Python installer; requests is not needed here).
    • Set up .env with the Gemini API key.
  2. Testing:
    • Ensure prompt-response functionality works as expected.
    • Test UI interactions and Gemini API responses.
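The prompt-response check above can be automated offline by replacing the Gemini model with a mock. This is a sketch, not part of the original code: `FakeBackend` is a hypothetical stand-in mirroring the shape of `ExpectingMotherBotBackend`, so no API key or network access is needed.

```python
from unittest.mock import MagicMock

class FakeBackend:
    """Hypothetical stand-in mirroring ExpectingMotherBotBackend's method shape."""
    def __init__(self, model):
        self.model = model

    def get_pregnancy_tip(self):
        result = self.model.generate_content("Give me a helpful tip for expecting mothers.")
        return result.text if result.text else "Sorry, I couldn't fetch a tip right now."

def smoke_test():
    # The mock stands in for genai.GenerativeModel; its canned .text attribute
    # lets us verify the prompt-response plumbing deterministically.
    model = MagicMock()
    model.generate_content.return_value.text = "Stay hydrated."
    backend = FakeBackend(model)
    assert backend.get_pregnancy_tip() == "Stay hydrated."
    model.generate_content.assert_called_once()
```

Keeping the model behind a constructor parameter is what makes this substitution possible without touching UI code.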

6. Monitoring and Maintenance

  • Usage Analytics: Track interactions for feature improvements.
  • Error Handling: Implement better fallback mechanisms for API failures.
  • Feedback Loop: Regularly update prompts based on user feedback.



Wednesday, 18 October 2023

AI TRUST & ADOPTION – THE METRICS TO MONITOR

 

Trust is critical to AI adoption. As next-generation AI models are deployed more widely, building trust in these systems becomes both more vital and more difficult. For example, for all the remarkable capabilities that Generative AI and LLMs deliver, these models are larger, more complex, and more opaque than ever. This makes it imperative to identify the right metrics and to monitor and report them continuously.

Below are some of the most critical metrics that every organization and business should continuously monitor and be able to report as and when necessary.

DATA

  • Date of instances
  • Date processed
  • Owner & steward
  • Who created it?
  • Who funded it?
  • Who’s the intended user?
  • Who’s accountable?
  • What do instances (i.e., rows) represent?
  • How many instances are there?
  • Is it all of them or was it sampled?
  • How was it sampled?
  • How was it collected?
  • Are there any internal or external keys?
  • Are there target variables?
  • Descriptive statistics and distributions of important and sensitive variables
  • How often is it updated?
  • How long are old instances retained?
  • Applicable regulations (e.g., HIPAA)
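One lightweight way to operationalize a checklist like the one above is to keep a machine-readable "card" next to each dataset and flag gaps automatically. The field names and values below are purely illustrative, not a standard schema.

```python
# Illustrative dataset card capturing the DATA checklist; every field name
# and value here is an assumption for demonstration, not a standard.
dataset_card = {
    "date_of_instances": "2023-01-01/2023-06-30",
    "date_processed": "2023-07-15",
    "owner_steward": "data-platform-team",
    "created_by": "analytics-eng",
    "intended_user": "churn-model developers",
    "accountable": "head-of-data",
    "instance_represents": "one customer account",
    "num_instances": 120_000,
    "sampling": "full population, no sampling",
    "collection_method": "CRM export",
    "keys": ["customer_id"],
    "target_variables": ["churned_within_90d"],
    "update_frequency": "monthly",
    "retention": "old instances kept 24 months",
    "regulations": ["GDPR"],
}

def missing_fields(card, required):
    """Report which checklist fields are absent or empty so gaps are visible early."""
    return [f for f in required if f not in card or card[f] in (None, "", [])]
```

Running `missing_fields` in a CI pipeline turns the checklist from a document into an enforceable gate.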

 

MODELS

  • Date trained
  • Owner & steward
  • Who created it?
  • Who funded it?
  • Who’s the intended user?
  • Who’s accountable?
  • What do instances (i.e., rows) represent?
  • What does it predict?
  • Features
  • Description of its training & validation data sets
  • Performance metrics
  • When was it trained?
  • How often is it retrained?
  • How long are old versions retained?
  • Ethical and regulatory considerations

 

BIAS remains one of the most difficult KPIs to define and measure. Hence, I am excited to have found some measures that can help quantify the presence of bias in some form.

  • Demographic representation: Does a dataset have the same distribution of sensitive subgroups as the target population?
  • Demographic parity: Are model prediction averages about the same overall and for sensitive subgroups? For example, if we’re predicting the likelihood to pay a phone bill on time, does it predict about the same pay rate for men and women? A t-test, Wilcoxon test, or bootstrap test could be used.
  • Equalized odds: For Boolean classifiers that predict true or false, are the true positive and false positive rates about the same for sensitive subgroups? For example, is it more accurate for young adults than for the elderly?
  • Equality of opportunity: Like equalized odds, but only checks the true positive rate.
  • Average odds difference: The average of the difference in false positive rates and the difference in true positive rates between a sensitive subgroup and the overall population.
  • Odds ratio: Positive outcome rate divided by the negative outcome rate. For example, (likelihood that men pay their bill on time) / (likelihood that men don’t pay their bill on time) compared to that for women.
  • Disparate impact: Ratio of the favorable prediction rate for a sensitive subgroup to that of the overall population.
  • Predictive rate parity: Is model accuracy about the same for different sensitive subgroups? Accuracy can be measured by metrics such as precision, F-score, AUC, or mean squared error.
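Two of the measures above, demographic parity and disparate impact, can be computed in a few lines from Boolean predictions grouped by a sensitive attribute. This is a minimal sketch in plain Python; the group names and data are invented for illustration.

```python
def positive_rate(preds):
    """Share of positive (favorable) predictions in a list of 0/1 labels."""
    return sum(preds) / len(preds)

def demographic_parity_gap(preds_by_group):
    """Max difference in positive-prediction rate across sensitive subgroups."""
    rates = [positive_rate(p) for p in preds_by_group.values()]
    return max(rates) - min(rates)

def disparate_impact(preds_by_group, subgroup):
    """Favorable-prediction rate of one subgroup divided by the overall rate."""
    overall = [y for preds in preds_by_group.values() for y in preds]
    return positive_rate(preds_by_group[subgroup]) / positive_rate(overall)

# Invented example: 1 = predicted to pay the bill on time.
preds = {"men": [1, 1, 0, 1], "women": [1, 0, 0, 1]}
gap = demographic_parity_gap(preds)      # 0.75 - 0.50 = 0.25
di = disparate_impact(preds, "women")    # 0.50 / 0.625 = 0.8
```

A disparate impact below 0.8 is the threshold often cited (the "four-fifths rule"), so this invented example sits exactly at the boundary.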

But considering all the above, we must be sensitive to, and cognizant of, the business and social context when identifying the “sensitive groups” mentioned above.

By no means is this an exhaustive list; it is only a start towards a safer and fairer digital ecosystem. I will do my best to consolidate new information as it emerges.

 

Thanks to Dataiku; some of this information was collected from the Dataiku report “How to Build Trustworthy AI Systems.”