Showing posts with label AI.

Monday 20 November 2023

Evaluating the success of an AI&ML use case

The data science team has finished developing the current version of the ML model and has reported an accuracy or error metric. But you are not sure how to put that number in context: is it good, or not good enough?

In one of my previous blogs, I addressed the question of AI investment: how long before the business can know whether an engagement has potential or is going nowhere? This blog can be considered an extension of that one. If you haven’t checked it out already, please visit: https://anirbandutta-ideasforgood.blogspot.com/2023/07/investment-on-developing-ai-models.html

In that blog, I spoke about KPIs like accuracy and error as thumb rules for quickly assessing the potential success of a use case. In this blog, I will try to add more specificity and context to them.

Fundamentally, there are three ways to evaluate the latest performance KPI of your AI & ML model, used independently or in combination.

Consider a human-level performance metric.

For AI use cases whose primary objective is replacing human effort, this can be considered the primary success metric. For example, if the current human error rate for a process stands at 5%, and the AI can achieve an error rate of 5% or less, it can be judged a valuable model: at the same error rate, AI also brings smart automation, a faster process, negligible downtime, and so on.

Example: Data-entry tasks can easily be replicated by AI. But the success criterion for adoption does not need to be 100% accuracy; the AI merely has to match the accuracy its human counterpart was delivering to be adopted for real-world deployment.
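To make this concrete, here is a minimal Python sketch of the comparison. The 5% human baseline and the prediction data are invented purely for illustration:

```python
# Hypothetical example: judge a model against the human error baseline.
# The 5% human error rate and the prediction lists are illustrative values.

def error_rate(predictions, ground_truth):
    """Fraction of predictions that disagree with the ground truth."""
    mistakes = sum(p != t for p, t in zip(predictions, ground_truth))
    return mistakes / len(ground_truth)

HUMAN_ERROR_RATE = 0.05  # measured error of the current manual process

model_predictions = ["A", "B", "A", "A", "C", "B", "A", "C", "B", "A"] * 10
ground_truth      = ["A", "B", "A", "A", "C", "B", "A", "C", "B", "B"] * 10

model_error = error_rate(model_predictions, ground_truth)

# The model is "good enough" when it matches (or beats) the human baseline.
if model_error <= HUMAN_ERROR_RATE:
    print(f"Adopt: model error {model_error:.1%} <= human {HUMAN_ERROR_RATE:.1%}")
else:
    print(f"Keep iterating: model error {model_error:.1%} > human {HUMAN_ERROR_RATE:.1%}")
```

The same comparison works with error rates from any task, as long as the human baseline was measured on comparable data.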

Base Model metric

For use cases where the problem being addressed is more theoretical in nature, or where discovery of the addressable business problem is still in progress, it is best to create a quick, simple base model and then try to improve on it with each iteration.

For example: I am currently working on a system to determine whether a piece of content was created by AI. Lacking any past reference against which accuracy can be compared, I have taken this approach to measure progress.
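A quick way to build such a baseline is a trivial majority-class model: later iterations are judged by their lift over that floor. The labels below are made up to mirror an AI-content-detection task like the one described:

```python
# Illustrative sketch: with no external benchmark, compare each model
# iteration against a trivial baseline. Labels are placeholder data.
from collections import Counter

train_labels = ["human", "human", "ai", "human", "ai", "human"]
test_labels  = ["human", "ai", "human", "human", "ai"]

# Baseline: always predict the most frequent training label.
majority_label, _ = Counter(train_labels).most_common(1)[0]
baseline_preds = [majority_label] * len(test_labels)

def accuracy(preds, truth):
    return sum(p == t for p, t in zip(preds, truth)) / len(truth)

baseline_acc = accuracy(baseline_preds, test_labels)  # the floor to beat
print(f"Baseline accuracy: {baseline_acc:.0%}")

# Each subsequent iteration is judged by its lift over this floor.
iteration_acc = [0.62, 0.71, 0.78]  # hypothetical per-iteration scores
for i, acc in enumerate(iteration_acc, start=1):
    print(f"Iteration {i}: {acc:.0%} (lift {acc - baseline_acc:+.0%})")
```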

Satisficing & optimizing metric

We outline two metrics: one that we want the model to do as well as possible on (the optimizing metric), and one that must meet some minimum standard for the model to be functional and valuable in real-life scenarios (the satisficing metric).

Example: For a home voice assistant, the optimizing metric would be the accuracy with which the model hears exactly what someone said. The satisficing metric would be that the model takes no more than 100 ms to process what was said.
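The selection logic reduces to a constrained search: filter on the minimum-standard metric, then maximize the optimizing one. The candidate models and numbers below are invented for illustration:

```python
# Sketch of minimum-standard + optimizing metric selection.
# Candidate names and numbers are hypothetical.
candidates = [
    {"name": "small",  "accuracy": 0.91, "latency_ms": 40},
    {"name": "medium", "accuracy": 0.94, "latency_ms": 85},
    {"name": "large",  "accuracy": 0.97, "latency_ms": 160},
]

LATENCY_BUDGET_MS = 100  # minimum standard: must respond within 100 ms

# Keep only models meeting the latency constraint...
feasible = [m for m in candidates if m["latency_ms"] <= LATENCY_BUDGET_MS]
# ...then pick the best on the optimizing metric (accuracy).
best = max(feasible, key=lambda m: m["accuracy"])
print(f"Selected: {best['name']} ({best['accuracy']:.0%} @ {best['latency_ms']} ms)")
```

Note that the most accurate model is rejected outright: it fails the minimum standard, so its accuracy never enters the comparison.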

Wednesday 18 October 2023

AI TRUST & ADOPTION – THE METRICS TO MONITOR

 

Trust is critical to AI adoption. As the next generation of AI models is deployed more widely, building trust in these systems becomes both more vital and more difficult. For example, for all the amazing capabilities generative AI and LLMs are delivering, they are also larger, more complex, and more opaque than ever. This makes it imperative to identify the right metrics and to monitor and report them continuously.

Below are some of the most critical metrics that every organization and business should be monitoring continuously, with the capability to report them as and when necessary.

DATA

· Date of instances
· Date processed
· Owner & steward
· Who created it?
· Who funded it?
· Who’s the intended user?
· Who’s accountable?
· What do instances (i.e., rows) represent?
· How many instances are there?
· Is it all of them, or was it sampled?
· How was it sampled?
· How was it collected?
· Are there any internal or external keys?
· Are there target variables?
· Descriptive statistics and distributions of important and sensitive variables
· How often is it updated?
· How long are old instances retained?
· Applicable regulations (e.g., HIPAA)

 

MODELS

· Date trained
· Owner & steward
· Who created it?
· Who funded it?
· Who’s the intended user?
· Who’s accountable?
· What do instances (i.e., rows) represent?
· What does it predict?
· Features
· Description of its training & validation data sets
· Performance metrics
· When was it trained?
· How often is it retrained?
· How long are old versions retained?
· Ethical and regulatory considerations

 

BIAS remains one of the most difficult KPIs to define and measure. Hence, I am excited to have found some measures that can help detect the presence of bias in some form.

  •          Demographic representation: Does a dataset have the same distribution of sensitive subgroups as the target population?
  •          Demographic parity: Are model prediction averages about the same overall and for sensitive subgroups? For example, if we’re predicting the likelihood to pay a phone bill on time, does it predict about the same pay rate for men and women? A t-test, Wilcoxon test, or bootstrap test could be used.
  •          Equalized odds: For Boolean classifiers that predict true or false, are the true positive and false positive rates about the same for sensitive subgroups? For example, is it more accurate for young adults than for the elderly?
  •          Equality of opportunity: Like equalized odds, but only checks the true positive rate.
  •          Average odds difference: The average of the difference in false positive rates and the difference in true positive rates between subgroups.
  •          Odds ratio: Positive outcome rate divided by the negative outcome rate. For example, (likelihood that men pay their bill on time) / (likelihood that men don’t pay their bill on time), compared to that for women.
  •          Disparate impact: Ratio of the favorable prediction rate for a sensitive subgroup to that of the overall population.
  •          Predictive rate parity: Is model accuracy about the same for different sensitive subgroups? Accuracy can be measured by things such as precision, F-score, AUC, mean squared error, etc.
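As a hedged illustration, two of the measures above (demographic parity difference and disparate impact) can be computed directly from binary predictions. The prediction data below is fabricated:

```python
# Illustrative computation of demographic parity difference and
# disparate impact on made-up binary predictions.
def positive_rate(preds):
    return sum(preds) / len(preds)

# 1 = favorable prediction (e.g., "will pay bill on time")
preds_group_a = [1, 1, 0, 1, 1, 0, 1, 1, 1, 1]  # sensitive subgroup A
preds_group_b = [1, 0, 0, 1, 0, 1, 0, 1, 0, 0]  # sensitive subgroup B
preds_overall = preds_group_a + preds_group_b

rate_a  = positive_rate(preds_group_a)   # favorable rate for group A
rate_b  = positive_rate(preds_group_b)   # favorable rate for group B
overall = positive_rate(preds_overall)   # favorable rate for everyone

parity_difference = abs(rate_a - rate_b)
disparate_impact = rate_b / overall      # subgroup rate vs overall population

print(f"Demographic parity difference: {parity_difference:.2f}")
print(f"Disparate impact (group B):    {disparate_impact:.2f}")
```

A parity difference near 0 and a disparate impact near 1 would indicate the model treats the subgroup about the same as the population; the fabricated data above deliberately shows a gap.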

But considering all the above, we must be sensitive to, and cognizant of, the business and social context when identifying the “sensitive subgroups” mentioned above.

This is by no means an exhaustive list, but only a start towards a safer and fairer digital ecosystem. I will try my best to keep consolidating new information.

 

Thanks to Dataiku: some of this information was collected from the Dataiku report “How to Build Trustworthy AI Systems.”

Tuesday 4 July 2023

Investment in developing AI & ML models – timelines & diminishing returns

 

One of the questions I am most often asked by stakeholders is about the timelines required for an ML model to finish development. I will try to address the subtleties of this topic in this write-up.

AI development is a unique scenario in which you are expected to deliver an innovation. It is a special case where the resources required are uncertain, and hence it is sometimes very difficult to know when and where to “STOP.”

When I talk to businesses, one of the things I stress most is that they define what an “MVP” solution means to them: the minimum accuracy, or maximum error rate, at which the AI solution would still be useful for their business.

If you are investing in AI use cases, one concept I would recommend you understand is AI resourcing and diminishing returns. Please look at the graph below –

 



So, what I suggest to AI investors is: if you haven’t reached an MVP by the point of maximum return (PoMR), “STOP.” For example, if at the end of the PoMR the model still has an error rate of 30%, and that does not work for your business, maybe AI cannot solve this for you, or maybe it needs a completely different approach. Whatever the case, deploying more resources is not the solution.

Drawing from my experience across the AI & ML use cases I have worked on for almost a decade now, a general thumb rule I recommend is: the accuracy or error rate you get at the end of 3 months is your point of maximum return. You should reach an MVP by then; beyond that, it should be fine-tuning or customizing to specific business needs. If by then you are still miles from your business objective, it may be time to pull the plug.
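One way to operationalize this thumb rule is a simple stopping check on marginal improvement. The accuracy curve, MVP threshold, and cutoff below are illustrative assumptions, not real project numbers:

```python
# Hedged sketch of "stop at the point of maximum return": track the
# marginal gain per month and decide once it falls below a threshold.
accuracy_by_month = [0.55, 0.72, 0.81, 0.83, 0.84]  # months 1..5 (invented)

MVP_ACCURACY = 0.80        # minimum accuracy the business can live with
MIN_MARGINAL_GAIN = 0.02   # below this, extra effort is not paying off

decision = None
for month in range(1, len(accuracy_by_month)):
    gain = accuracy_by_month[month] - accuracy_by_month[month - 1]
    if gain < MIN_MARGINAL_GAIN:
        # Diminishing returns reached: did we get an MVP in time?
        reached_mvp = accuracy_by_month[month] >= MVP_ACCURACY
        decision = "fine-tune" if reached_mvp else "pull the plug"
        break

print(f"Decision at diminishing returns: {decision}")
```

With the invented curve above, the gain flattens after the model has already cleared the MVP bar, so the verdict is to fine-tune rather than keep pouring resources in.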

This is again an opinion piece, and this has been my experience. I will be glad to hear how the journey has been for you.

Monday 19 June 2023

AI, ML, Data Science - frequent consultation advice for leadership

In this post I have tried to compile the questions, discussions, and queries I most often come across while consulting on data science road maps with leaders and managers. I hope this compilation will also add value for other leaders and managers who may at some point have wondered about these things but didn’t get the opportunity to have the discussions. Many of you may have better responses or more exposure to some of the questions; I have compiled them to the best of my knowledge and as I go about explaining them.

Please reach out if you have a question or point that comes up a lot during consultations and is worth discussing.

1. Should we build this product or capability in-house or get it from a vendor?
A vendor product will always be generalized to cut across as many businesses as possible, since vendors thrive on repeatability. When you build in-house, you can customize for a smaller set of use cases or scenarios and perhaps create better differentiation.
So please ask yourself –
· When partnering with a vendor, what role do I play? What stops the vendor from partnering with the business directly in the future? What is my value addition, and is there a risk I may become insignificant?
· What kind of team do I have? If you have a great engineering team, maybe you want to do more in-house and keep a bigger piece of the pie for yourself.
· What is my core capability? Is the capability needed in line with our core skills, or is it something we want to learn? Then maybe we should do it in-house. Or is it something we just want to get done? Then maybe the best way is to get a vendor involved.

2. We have created certain analytics use cases, but we see several other teams creating similar ones.
Differentiation of an analytics product or use case is driven by each of the below, or a combination of them –
a) Deep domain knowledge
b) Data from different systems brought together in a dynamic big data system
c) Deep or mature algorithms applied
If your use cases are easy to replicate, they are most probably built on shallow data, with very general domain knowledge, using basic data science techniques.

3. Are we using the kind of AI that is used for technologies like self-driving cars?
Yes and no. Internally, all these technologies use combinations of neural networks and reinforcement learning, and for different use cases we have used variations of the same and similar techniques. But technologies like self-driving cars work on image or vision data, which we generally don’t; our use cases are mostly based on numerical, text, and language-processing data.

4. The vendor says their product is being used by Amazon. So should we go ahead and buy it?
Maybe it is being used by Amazon or a similarly big company, but ask the vendor whether their product is used for a mission-critical process, for some PoC, or to store and process data like click-stream data that is not business critical. It makes all the difference whether the logos vendors show you are using their technology for business-critical projects or for non-critical processes.

5. We are showing the use case to the business, but it’s not making much of an impact.
The storytelling needs to improve. Every analytics use case must be associated with a story that ends with the business making or saving money. If the story cannot relate how the engineering will improve the customer’s financial bottom line, the customer’s business does not care about it, irrespective of how good the engineering is.

6. Now that we have a data scientist on our team, can we expect more insights from our data?
Data scientists alone cannot ensure project success. Data engineers and big data and cloud infrastructure engineers are equally important parts of the technical team. Without the infrastructure in place, and data stored in it in the proper format, a data scientist cannot work their magic.

7. We are finding it very difficult to hire data scientists and big data developers.
Though there is no dearth of CVs, finding genuinely talented people with real knowledge and production-implementation experience is difficult. And among the few, most are already well paid and on good projects. So whenever a decision is taken to hire senior data science talent, a six-month time frame should be kept in hand.

8. What is the difference between ML and AI?
Though you will find several answers to this on the internet, one good way I have found to explain it to a business person, without the jargon, is as follows. By definition, ML falls within the broader scope of AI. But to understand and remember it better: AI is something built to replicate human behavior. A program is called a successful AI when it can pass a Turing test, i.e., when we cannot tell whether the intelligence is coming from a machine or a human. ML is a system you create to find patterns in a data set too big for a human brain to comprehend. On a lighter note – if you ask a machine 1231*1156 and it answers in a fraction of a second, it is ML; if it pauses, makes some comment, and answers after 5 minutes, like a human, it is AI.

9. Why aren’t we using a big data Hadoop architecture instead of an RDBMS like MSSQL or Oracle?
RDBMS products like MSSQL and Oracle are still viable analytics products and are not replaceable by big data tools in many scenarios. Deciding on a data store or a data processing engine involves many factors: ACID/BASE properties, type and size of data, the current implementation, the skill set of the technical team, etc. Doing an analytics project does not make Hadoop or a NoSQL product the default.

10. Here is some data; give me some insight.
This is the first line of any failed initiative. A project that is not clear about the business problem it wants to solve is sure to fail. Starting an analytics project without a clear goal in mind, just for the sake of adding a data science project to the portfolio, with no road map for how it will eventually contribute to company goals, is a waste of resources and will only end in failure.

Saturday 3 June 2023

AI/ML and human brain - Similarities and their implication in Corporate and Political campaigns

The premise of this article is that our brain is similar to a machine: the biases and errors the machine experiences are experienced by the brain too, and in some cases this can be leveraged to drive success in a marketing or electoral campaign.


1. The good orator - No election has ever been won without a good orator at the helm. However good your policies are, at the end of the day they need to be sold. A good orator is like quality data to a machine: to drive learning in a machine, even if we want to propagate biases in some way, the data needs to be clearly and strategically delivered. This delivery of data to steer minds and machines in a certain direction has also been called propaganda in warfare.

2. The 360-degree delivery – To drive a target candidate to take a certain action, stimulus must be provided from every direction. It is widely accepted that the same objective can be achieved with much less effort using stimulus from multiple dimensions rather than a larger one-dimensional effort. It is the same reason we come across our favorite brands across multiple platforms: TV, radio, newspaper, social media, online ads, etc. According to one successful marketing platform, ‘personalized messaging across email, SMS, direct mail, and more, alongside personalized online response’ leads to a much more successful marketing campaign. AI systems inspired by this characteristic of the brain are always advised to be built on data from as many diverse source systems as possible.

3. What is in it for me - Human beings, by the very nature of their existence and survival instinct, mostly react to news and events that directly affect their well-being. The same idea is implemented in reinforcement learning, where the agent takes actions to fulfill an objective, which in our case is survival. So to make your target audience take notice of any policy or idea, it should be narrated as tightly coupled with the audience’s well-being; it should answer their basic question: how will it affect me?

4. Relative rather than absolute – Human brains intuitively understand the relative much better than the absolute. If you ask most people whether a deal is good, they will generally judge it based on the deals other people are getting. In the same way, you can manipulate a machine into labeling something as particularly high or low by strategically infusing data at the other end of the scale. Likewise, during a political or marketing campaign it is not enough to advertise your positives; it is also important to emphasize your opponent’s weaknesses.

5. Confirmation bias – By definition, “Confirmation bias occurs from the direct influence of desire on beliefs. When people would like a certain idea or concept to be true, they end up believing it to be true. They are motivated by wishful thinking. This error leads the individual to stop gathering information when the evidence gathered so far confirms the views or prejudices one would like to be true.” This is a psychological error that can be used to one’s advantage during a political or marketing campaign. Political and corporate organizations have at various times taken advantage of it by implanting a biased belief system at an early age of the life or consumption cycle.

6. ML bias – Machine learning applications develop inherent biases when fed data tilted towards certain stereotypical trends, owing to the flawed way in which society develops. Take a real-world example of a machine learning model designed to differentiate between men and women in pictures. When the training data contains more pictures of women in kitchens than men in kitchens, or more pictures of men writing computer code than women writing computer code, the algorithm is trained to make incorrect inferences about the gender of people engaged in those activities. The human brain can be manipulated the same way: if you give a brain enough examples associating people with certain characteristics with certain kinds of actions – either good or bad – it inherently starts associating those people with those activities without extensive thought.
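The kitchen/coding example can be reduced to a toy frequency model. The counts below are fabricated purely to show how skewed data becomes a skewed association:

```python
# Toy illustration of how skewed training counts become skewed associations.
# The counts are fabricated to mirror the kitchen/coding example above.
from collections import Counter

# (activity, gender) pairs as they might appear in a scraped training set
training_examples = (
    [("kitchen", "woman")] * 80 + [("kitchen", "man")] * 20 +
    [("coding", "man")] * 75 + [("coding", "woman")] * 25
)

counts = Counter(training_examples)

def p_gender_given_activity(gender, activity):
    """Conditional frequency P(gender | activity) from the raw counts."""
    total = sum(n for (act, _), n in counts.items() if act == activity)
    return counts[(activity, gender)] / total

# A frequency-based learner "infers" gender from activity alone:
print(p_gender_given_activity("woman", "kitchen"))  # dominated by the skew
print(p_gender_given_activity("man", "coding"))
```

Nothing in the model is "wrong" mathematically; it faithfully reproduces the imbalance it was given, which is exactly the problem.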

However amazing our brain is, it still has certain flaws, which have been inherited by ML and AI processes, since these are inspired by the brain itself. But I guess all these imperfections are what keep us human.

Tuesday 9 May 2023

AI framework for self learning Q&A agent

 Pantomath AI bot framework

Definition: A pantomath is a person who wants to know and knows everything.

What is Pantomath

Pantomath is an AI framework inspired by human learning patterns, developed at ZERO cost using ZERO proprietary software or frameworks – just open-source R – that can learn any subject and respond to queries about it. It can learn any domain, topic, and subject, and keeps getting better and more knowledgeable with time and experience.

Why Pantomath?

· Pantomath has been designed on the idea of general AI, with the capability of learning different domains.
· While various enterprise solutions exist, they concentrate on a particular domain.
· It has been developed from open-source frameworks, hence there is no attached proprietary price.
· Businesses can easily enable Pantomath to automate FAQs, knowledge management, menu handling, computer troubleshooting, etc. For anything that has the pattern of resolving a query and does not need a detailed conversation or diagnosis, Pantomath can scale extremely well and can save significant cost while improving customer satisfaction.
· It has a research-oriented, scalable technical back end.

Pantomath: How does it work?

Steps

1. Enter a few sample Q&As on different topics for it to start learning and conversing.
2. Given the samples, it tries to learn how to answer questions on the same topics asked differently, or similar questions on the same topic.
3. With each conversation it reinforces and reconfirms its knowledge.
4. If it does not know a topic, it confirms that it does not know it and asks for more knowledge material or hints to be fed into it.
5. With more conversations it learns more about language subtleties and gathers knowledge about different topics (just like us).

Pantomath: How does it constantly learn?

Pantomath’s learning model is inspired by David Kolb’s learning model and the human learning pattern from birth to adulthood.

David Kolb’s learning model

1. Concrete experience (a new experience or situation is encountered, or an existing experience is reinterpreted).
2. Reflective observation (of the new experience; of particular importance are any inconsistencies between experience and understanding).
3. Abstract conceptualization (reflection gives rise to a new idea, or a modification of an existing abstract concept).
4. Active experimentation (the learner applies the ideas to the world around them to see what results).
Reference: https://medium.com/@johnharrydsouza/david-kolb-s-cycle-of-learning-2777d150d09e#.xitj0ph53

Human learning development


1. After birth – A baby is born with basic human instincts and gradually learns its initial movements.
2. Toddler – It starts interacting with the environment while still learning basic movements; guidance from parents at this stage is critical.
3. Childhood – It has almost finished learning its basic movements, with most of its learning now coming from interacting with the environment and far less from guidance.
4. Adulthood – It has learned most of its survival skills by learning independently from the environment, now rarely needing guidance.

Learning trajectory for the algorithm with experience
Similarly, with more experience and maturity, the bot needs less guidance and becomes more self-sufficient.

Pantomath: Stages of development

Text similarity – The first stage is implemented using text-similarity pattern matching, recommending the response best suited to the current question.
A string metric measures the similarity or dissimilarity (distance) between two text strings, for approximate string matching or comparison. Corpus-based similarity is a semantic similarity measure that determines the similarity between words from information gained from large corpora.

Neural net – After that, we would like to implement an ANN (artificial neural network) to learn the weights of the different words used in the conversation and recommend the best response.
Neural networks are a set of algorithms, modeled loosely on the human brain, that are designed to recognize patterns: a network or circuit of neurons, or in the modern sense an artificial neural network composed of artificial neurons or nodes.

Reinforcement learning – The next stage would be assigning an agent to the bot which interacts with its environment and is rewarded for right responses and penalized for wrong ones. With time, the agent learns and adjusts itself towards better responses.
Reinforcement learning is an area of machine learning concerned with taking suitable actions to maximize reward in a particular situation; it is employed by various software and machines to find the best possible behavior or path in a specific situation.
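As an illustrative stand-in for the text-similarity stage (the original framework was built in R; this sketch uses Python's difflib, and the knowledge-base entries are abridged, invented strings):

```python
# Minimal sketch of the text-similarity stage: score the user's query
# against every known issue with a string metric and pick the best match.
from difflib import SequenceMatcher

knowledge_base = {
    "mouse not working": "Check if the mouse is securely plugged in...",
    "keyboard not working": "Make sure the keyboard is connected...",
    "computer is slow": "Restart your computer and free up disk space...",
}

def best_match(query, kb):
    """Return the known issue most similar to the query, with its score."""
    scored = [
        (SequenceMatcher(None, query.lower(), issue).ratio(), issue)
        for issue in kb
    ]
    score, issue = max(scored)
    return issue, score

issue, score = best_match("mouse not moving correctly", knowledge_base)
print(issue, round(score, 2))
```

With a best-matching issue in hand, the bot can surface the stored troubleshooting steps for it, exactly as the scenarios below walk through.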

Working code example

The sample use case below feeds Pantomath with sample computer troubleshooting scenarios; here are some records of the initial set.
Internally it creates an auto-mapping rule in its brain. Let’s see how the mapping table looks internally at each stage of the conversation.
At the initial stage, it creates auto tags for all the sample issues provided, which help it recommend a solution using text-similarity pattern matching via a string metric. –
*The troubleshooting steps are as in the sample above; I haven’t rewritten them, to save space.
Scenario 1 - Show a result based on probability match
First issue we asked the bot –
mouse not moving correctly
Bot suggests –
"Check if the mouse is securely plugged into the computer. If not, plug it in completely.\r\n· Check to see if the cord has been damaged. If so, the mouse may need replacing.\r\n· If you are using a cordless mouse, try pushing the connection button on the underside of the\r\nmouse to reestablish a connection.\r\n· Clean the mouse, especially on the bottom.
Which, if we move up the sample provided, we see is the solution for this issue –
Then the bot reconfirms if I am happy with the solution it provided. If I say YES, it responds.
"Glad to hear i could help and it has made me wiser"

Scenario 2 - Store queries which it could not resolve
In the next scenario, let’s ask a question it may not know and needs to learn externally.
Let’s ask-
Not able to use the mouse
Bot says
Sorry i can’t help you regarding this. I will pass you to the next level engineer
And we see that in its internal memory map it has made another entry with the new query and auto tags, with no TroubleShootingSteps corresponding to it, as it does not know how to solve it.

So the bot goes back to its human handler, tells them this is a topic it does not know and hasn’t been able to learn from conversations, and asks them to provide some knowledge.

Scenario 3 - Show multiple options ordered by similarity; learn from the option chosen; show the better option next time
In the next scenario, let’s ask a question for which it may have multiple recommendations.
Let’s ask-
"keyboard problem"
The bot gives two recommendations
[1] "Make sure the keyboard is connected to the computer. If not, connect it to the computer.\r\nIf you are using a wireless keyboard, try changing the batteries.\r\nIf one of the keys on your keyboard gets stuck, turn the computer off and clean with a damp\r\ncloth.\r\nUse the mouse to restart the computer."
and
[2] "Clean the keys thoroughly"
The bot then asks me to confirm which one actually solved the ticket so that it can refine its learning. I said 2, as the second recommendation solved the ticket for me. The bot responds -
"Glad to hear i could help and it has made me wiser"
And we see in its internal memory map that it has made another entry, auto-tagging the number 2 solution for this question, so that next time it is asked the same thing it can respond better.
So when asked the same question again
"keyboard problem"
It responds
[1] "Clean the keys thoroughly"
and
[2] "Make sure the keyboard is connected to the computer. If not, connect it to the computer.\r\nIf you are using a wireless keyboard, try changing the batteries.\r\nIf one of the keys on your keyboard gets stuck, turn the computer off and clean with a damp\r\ncloth.\r\nUse the mouse to restart the computer."                                 
Interestingly, learning from its last interaction, it now suggests “Clean the keys thoroughly” as the first option and the other one as the next option.
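Scenario 3's re-ranking behavior can be sketched as a feedback counter; the data structures here are illustrative, not Pantomath's actual internals:

```python
# Sketch of the re-ranking step: remember which recommendation actually
# solved the ticket, and surface it first next time.
from collections import defaultdict

# success_counts[query][solution] = times users confirmed this solution
success_counts = defaultdict(lambda: defaultdict(int))

def record_feedback(query, solution):
    success_counts[query][solution] += 1

def ranked_solutions(query, candidates):
    """Order candidate solutions by past confirmed successes (descending)."""
    return sorted(candidates, key=lambda s: -success_counts[query][s])

candidates = [
    "Make sure the keyboard is connected...",
    "Clean the keys thoroughly",
]

print(ranked_solutions("keyboard problem", candidates)[0])  # initial order
record_feedback("keyboard problem", "Clean the keys thoroughly")
print(ranked_solutions("keyboard problem", candidates)[0])  # learned preference
```

Because the sort is stable, candidates with no feedback keep their similarity order, and a single confirmed success is enough to promote an option to the top.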

Scenario 4 - The user does not like any option chosen, store queries which it could not resolve
In the next scenario, let’s again ask a question, but this time we don’t choose its recommendation.
Let’s ask-
“The mouse is slow"     
The bot gives me a recommendation
[1] "Restart your computer.\r\n· Verify that there is at least 200-500 MB of free hard drive space. To do so, select Start and\r\nclick on My Computer or Computer. Then highlight the local C drive by clicking on it once.\r\nSelect the Properties button at the top left-hand corner of the window; this will display a\r\nwindow showing …                                             
The bot then asks me to confirm whether I found the recommendation usable, to which I said NO.
The bot responds –
"Sorry i could not help you. We will add content to fulfill your request in future."
And we see that it has made another entry in its internal memory map, since it realized it needs to learn more about this issue. So the bot goes back to its human handler, tells them this is a topic it does not know enough about, and asks for more hints so that it can give a better answer next time.
What Pantomath is not
· Pantomath is not a conversational agent; it is a Q&A agent. Though it learns from each conversation and remembers how the user responded to its previous answers, it does not remember personal conversational context or non-business-critical facts.
· Pantomath is not a diagnosis tool. Though it may, with time, learn to suggest recommendations for general questions, it is not built to find root causes through a series of questions.
· Pantomath cannot go and open tickets for you in another environment. It provides information to the user, but it cannot take action on their behalf.

Conclusion

Businesses employ enormous human resources daily to respond to user questions on various topics. While some questions need complex diagnosis, most are rudimentary and repetitive in nature. Pantomath can be easily deployed and scaled to automate a major proportion of this work. It is an AI platform developed on the basis of human learning patterns: it learns from conversations, asks for help wherever it needs it, and matures with time. It can adjust to any domain and learn any topic.
Businesses can easily enable Pantomath to automate
· FAQs
· Knowledge management
· Menu handling
· Computer troubleshooting, etc.
Anything that fits the pattern of resolving a query without needing a detailed conversation or diagnosis, Pantomath can scale to extremely well. It is also extremely cost-effective, as it is built entirely without third-party enterprise components.