Saturday 28 January 2023

The rise of Entrepreneurial Engineers

 

I often use the sentence – good engineers build things; great engineers build valuable things. But the idea of value is abstract; what might be valuable to one person may not be valuable to another. That is not so when it comes to profitable organizations. Here it should ultimately end up contributing to organizational value while complying with all governance checks. I have seen engineering colleagues wondering if we should worry about all this. Maybe 5 years back the answer would have been "not necessary", but now it is necessary. The rise of Entrepreneurial Engineers is like what we observed with the rise of the Citizen Data Scientist, but the other way around. The idea is that, though this engineering team still works with the business owners and the product owner & management team, they have skin in the game of what gets built. They understand business priorities and are driven towards solving customer problems and adding business value. They are aware of how to apply their engineering mindset to solve impactful problems using sustainable solutions. 

With the teams I work with, this is something we continuously work towards. Some of the steps I take with my team, and that I can recommend, are:

Before adopting any feature, we try to ask ourselves:

  1. How will this feature add value?
  2. What will happen if we don’t adopt this feature?
  3. How is this problem currently addressed?
  4. Can we find out the financial impact & Return on Investment of this feature?

All this can be documented in different artifacts like a CONOPS document, a value document, user stories, etc. 

With the teams I work with, we have, through experience, discovered, learned, and created a process of continuous conversation & collaboration. It is a cycle of continuous conversation with the other stakeholders. Some of the checkpoints we have developed over the years are - 

  1. Do an internal tech team analysis – feasibility, compatibility & valuation analysis
  2. Present understanding, POC or plan to business leaders, owners & SMEs.
  3. Broader Technology team peer presentation
  4. Discussion with vendor to clarify assumptions, if required
  5. Discuss with enterprise architects how this piece will interact with overall architecture
  6. Then start the implementation

But there may be scenarios where, before being able to do any of this, you might need to have an MVP in place before you can have any discussion. This strategy is extremely useful when the overall team is not too sure what would be valuable to build. It is when all the stakeholders see a tangible MVP in front of them that they start to share improvement ideas. In most new product development lifecycles, the first few features are developed like this, until the process matures.

In the last decade, with more engineers in board rooms than ever, this new breed of entrepreneurial engineers will be a significant player in shaping the organizational vision & roadmap. The organizations which will thrive & grow will need to have 2 things:


·         Superior product-market fit

·         A good product design & architecture

 

Organizations should put a lot of focus on these two areas. Once these two are in place, they enable the Sales & Marketing teams to do their job effectively. Organizations & teams should be very aware that all operational & enabling processes should be focused on improving these 2 areas. Once a solid foundation of these is in place, customer relationships & delivery should take care of themselves.

 

Sunday 18 December 2016

FUNDAMENTAL AND BASIC DIGITAL MARKETING ‘To–Do’s FOR SMEs

First of all, just to make something clear, I am not a digital marketing wizard. Rather, I have been the founder of a few organizations and had to do the digital marketing myself, as I didn’t have enough funds to hire a digital marketer.
To put in a disclaimer: my suggestions may not be the best by the book, but they are the ones which worked best for me, are affordable and are easy to do.

So here they go, in the order in which I implemented them for my organization –
1) Have a Facebook page:
I know many people will rather have a website first, and that is not wrong by any means. The advantage of a Facebook page is that it is free, whereas for a website you will need a domain, which will cost you some money. Having a Facebook page will give you some form of internet presence as a starter. You can share the page with your friends and acquaintances and get your idea or product validated. It will be a bonus if you can get some kind of Facebook page ad going, subscribed to with a reasonable amount.

2) Have a Twitter account:
Whatever holds true for Facebook also holds true for Twitter. Why do we need both Twitter and Facebook? Because there are a lot of social media users who are exclusive to either Facebook or Twitter.

3) Create a website:
A website is a World Wide Web footprint every business and organization needs. It tells the visitor who you are and what you do. A good website goes a long way in creating trust and understanding among visitors. The domain should be representative of the organization. It lets one have a single point of entry for all social media platforms.

4) Register your website with Google and enable Google Analytics:
Register your website with Google to be crawled by Google bots, thus enabling it for Google ranking. I think no one will dispute the importance of a website showing up in Google ranking. You can do some basic SEO, but do not spend too much effort or resources on SEO now. Then go ahead and set up your Google Analytics. The basic version of Google Analytics is a free product and can be extremely useful to analyze your visitors' demographics and website performance.

5) Google AdWords and PPC:

It’s said – ‘Google’s 2nd page is the best place to hide a dead body.’ And it’s not so easy to get to Google’s first page, especially if you have chosen a popular keyword. Well-researched keywords and a strategic Pay Per Click campaign can give one some good inorganic reach until one gets their website's organic ranking higher.

Friday 24 June 2016

Prediction of EURO 2016 using just a spreadsheet and some publicly available data

The following analysis is an exercise I performed to predict the outcome of EURO 2016. My resources were just a spreadsheet and some publicly available data.

Some of the hypotheses taken for this prediction are:
  1. Players playing in bigger clubs (club ratings as per UEFA) will perform better in Europe.
  2. The best players in European countries play in European clubs.
  3. The strength of the 23-member squad will better determine a nation's performance than just the starting 11.
  4. Every position, be it goal-keeping, defense, midfield or striker, matters equally to the success of a nation.

Factors not taken into consideration:
  1. Form of the player
  2. Team work
  3. Confidence of an individual player
  4. Home advantage
  5. Injury
  6. Credibility of the manager

Steps followed in the analysis:
Step 1: The 23-member squad list of each country was collected -- player names and the clubs they play for.
Step 2: A list of 400 clubs across Europe was collected, along with their standings as per UEFA.

For my final analysis I only considered the top 100 clubs from the list of 400, with the hypothesis that a player can only make an impact if he plays for one of the top 100 clubs in Europe. I divided the 100 clubs into 10 segments of 10 clubs each. Then I rated each club, with the top segment getting 10 points and the bottom getting 1. Then I looked up all the players of each nation and rated them based on their club ratings, which gave a cumulative rating for each country. Then, going to the fixtures, I concluded the results with the assumption that a nation with a higher rating will progress through the tournament whereas a nation with a lower rating will not.
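If the squad lists and club standings were loaded into two tables, the whole rating step boils down to a single aggregate query. A minimal T-SQL sketch, assuming two hypothetical tables player_squad(country, player, club) and club_standing(club, uefa_rank) that are not part of the original spreadsheet:

-- Hypothetical tables, for illustration only:
--   player_squad(country, player, club)  -- 23 rows per country
--   club_standing(club, uefa_rank)       -- UEFA standing, 1 = best
-- Clubs ranked 1-10 score 10 points, 11-20 score 9, ..., 91-100 score 1;
-- clubs outside the top 100 contribute nothing.
SELECT ps.country,
       SUM(11 - CEILING(cs.uefa_rank / 10.0)) AS country_rating
FROM player_squad AS ps
JOIN club_standing AS cs
  ON cs.club = ps.club
WHERE cs.uefa_rank <= 100
GROUP BY ps.country
ORDER BY country_rating DESC;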

Prediction:

Quarter Final teams: Ukraine, Spain, England, Belgium, Germany, Italy, France, Russia

Semi Final teams: Spain, Belgium, Germany, France

Final: Spain, Germany

Champion: Spain
Runner-up: Germany

The analysis may seem simplistic, but one of the main objectives of the exercise is to encourage readers to start doing analysis on simple use cases and realize, beyond the smoke screen of data science jargon, that it's not that complicated. Once an analyst completes a use case like this, he experiences a complete analytics life cycle. What will generally change for a detailed analysis is the volume of data and the number of factors to be considered.


Learnings that we may take from this exercise are -

Searching for the right data: The use case and the hypotheses tell us what data to search for, gather or ask the business for. Many times in my career I have seen analysts being handed some data and asked to find something interesting. That should not be the case. It should be the business use case driving the analysis.

Data preparation: You must have heard the 80-20 rule, where 80% of the time is spent preparing the data and 20% doing the actual analysis. My data came from web links, so I had to scrape it, massage it and clean it to get it into a shape that could be used for analysis.

Feasibility of the variables to consider: The complexity and the accuracy of the algorithms mostly depend on the suitability of the algorithm for the use case, the extent of variables considered and the size of the data analyzed. Looking at the timeline and resources in hand, one should decide how extensively one wants to go about it.

Consideration of hypotheses: The hypotheses considered should be clearly mentioned as part of the analysis. The result of the analysis will prove or disprove our hypotheses.


Saturday 30 April 2016

EMI – The dream killer – an Indian software engineer's perspective.


'Well, I had a dream' – the words you will often hear from souls STUCK in IT suffering from a mid-career crisis. 'But what happened, why didn't you follow it? DELL did, JOBS did, GATES did, our own BANSALS did,' you may ask. 'Well, you see, abroad it's easier financially, and I had EMIs to pay.' Then who is to blame – the poor souls who have to leave their hometown, come and settle down in an unknown land, and start everything from scratch? EMI is a reality for them. Although there are a few chosen ones who get to travel the fairyland (read 'US', 'Europe') and can save enough money to pay the devil's due in advance, for the others there is no way out. So the EMI killed their dreams – dreams to start their business, to set up that start-up, to make that travel. But can you blame them? I won't. Anyway, they get blamed enough every day by the traffic police, corporation, maids and autowallas for not learning the local language. :)

Sunday 5 July 2015

10 THINGS TO KEEP IN MIND FOR AN ANALYTICS PROJECT

  1. Before you start solving your analytics use case, ask yourself – how significant the change will be for the business if you get the perfect answer to your question. If the change is not significant enough don’t even bother to start solving it.
  2. The objective of your project should be a business problem or a strategic solution. If you see yourself solving a tactical or IT problem, remember you are impacting the means to an end but not the end.
  3. Decide what you want to do – assign a weightage to a factor the customer already knew about, or give insight on a factor the customer didn’t know affects his business. For example, while doing analytics on students' attendance, an insight for the former case would be – whenever it rains, the students' attendance drops by 27%. Here the school always knew rain has an adverse effect on attendance, but they never knew it was 27%. (This can appear to contradict point 2, but in some cases that is what is specifically asked for by the customer. Whenever possible, try avoiding it.) For the latter case it would be – whenever there is a bank holiday, students' attendance will be negatively affected. This is something the school never knew about.
  4. Always make sure you ask your customer what deviation percentage from the actual business value can be considered as prediction success. Because, as good as your analytics may be, you will never be able to capture all the influencing factors affecting the business numbers. For example, the customer can say ‘if your predicted sales are within 5% on either side of my actual sales, I will consider it correct.’
  5. While doing analytics for your customer, whenever possible, try to avoid giving your insight as an absolute value. Because no matter how many factors you may have included in your analysis, there will always be those unknown ones which can turn your prediction wrong. Rather, try to rank the factors influencing the customer's business based on their influence. The business can thus better plan what to concentrate on as a priority.
  6. Ask the customer what the offset of an event is, meaning the lag between an event occurring and its results getting reflected. For example, an ad campaign being launched and the timeline around which sales get a lift may have an offset of 2 months between them. This changes from product to product, depending on the factors and results. For some it may even be instantaneous.
  7. Try to understand from your customer what a significant change means to them. A value may be a significant change to one customer but not to another.
  8. Don’t try to pick the use case, pick a use case. When you are talking to the business, let the business choose the use case for you. Just provide them the below matrix.
  9. Whichever use case you choose to implement will have multiple source systems of data. For most of the cases it’s not possible to include every source system as part of the analytics. To decide which source systems to spend your time on, use the below graph –  
  10. Don’t try to answer all the business questions. Rather, try to give insights which will enable the business to ask more questions. No one will know the business better than the business owner.

Wednesday 18 February 2015

HADOOP – HOT INTERVIEW QUESTIONS - What is the responsibility of name node in HDFS?

  1. The Name node is the master daemon that maintains the metadata for blocks stored on the data nodes.
  2. Every data node sends heartbeats and block reports to the Name node.
  3. If the Name node does not receive a heartbeat, it identifies that the data node is dead. The Name node is a single point of failure. Without the Name node there is no metadata, and the Job Tracker can’t assign tasks to the Task Trackers.
  4. If the Name node goes down, the HDFS cluster is inaccessible. There is no way for the client to identify which data node has free space, as the metadata is not available on the data nodes.

Saturday 12 July 2014

Bit level SQL coding for database synchronization

Abstract

Client X has an ERP system with a master module and several instances of its client installed across the globe. The business requires all the client instances to be in sync with the master, with exceptions. The way to achieve that was to keep the metadata of the instances, stored in the system databases of the client and the master, in sync. One way of doing that was to restrict the users from making changes on the client, or to run synchronization scripts at regular intervals to sync the master and the clients.

Introduction

The objective of the project was to ensure that all client databases are using a group standard set of data in key tables. It also allows client databases to set certain local persisted fields whilst blocking changes to other fields. The way to achieve that was to develop triggers on the local tables to enforce the business rules, and, for cases where triggers cannot stop local users with DBA access from disabling the triggers and updating the synchronized data, to develop stored procedures run every night to overwrite and persist the group data standards.

So the solution designed had the following target implementations:

Client Synchronization Triggers

o   SQL Triggers on the database to enforce the business logic by allowing or preventing changes to the synchronized data

Master Audit Triggers

o   SQL Triggers on the database will track any changes made to the synchronized tables

Synchronization Scripts

o   SQL scripts contain the business logic and will copy the necessary values from the tables in the GBLDEVALLSynchronisation schema to the Production tables.

Problem Definition

One such requirement for us was to check whether the setting options applied in the clients were in sync with the master. Each of the setting options was applied through check boxes.
When we checked the metadata tables, we found that all the setting options of a screen were stored within a single column value in a table, with each setting option represented by a bit.
Now the users were allowed to make changes to certain settings, while prevented from changing others.
So our job was to search for bits at particular positions within the value and prevent the user from changing those bits from 1 to 0 or vice-versa. In case someone with administrator privileges disables the trigger and changes the restricted values, scripts run at night search for those bits, compare their values between master and clients, and in case of a difference revert the client's value.

High Level Solution


For us there were two deliverables:
  1. Triggers – to restrict the users from making changes.
  2. Synchronization scripts – in case someone makes a change, to identify the change and revert it.

Since it was a bitwise operation, the operators available to us were:
  1. & (bitwise AND)
  2. | (bitwise OR)
  3. ^ (bitwise exclusive OR)

The challenges we faced were:
  1. Searching for bits at particular positions.
  2. Turning a bit on or off if a mismatch is found.

Solution Details

Let us try to explain the solution we implemented for the trigger. Let’s take an example for the bit position 256. The meaning of this requirement is that the value at bit position 256 should be the same for the client and the master, whereas the values at the other bit positions may differ. In other words, the client is not allowed to change the value at the 256 bit position while they can change the values at other bit positions.
Let’s consider the master has the decimal value 421, binary value (110100101).
And the client tries to change it to the decimal value 165, binary value (010100101).
The result should be that it throws an error, because the value at the 256 bit position is different. That the values at the other positions also differ does not matter.
So the solution we came up with is applying & (Bitwise AND) the values with bit position we are searching for and comparing the values.
[
Example of how & (Bitwise AND) works.
(A & B)
0000 0000 1010 1010
0000 0000 0100 1011
-------------------
0000 0000 0000 1010
]
For Master - 421 & 256 = 256.
i.e. 110100101 & 100000000 = 100000000.
For Client – 165 & 256 = 0
i.e. 010100101 & 100000000 = 000000000
Since the values are different it throws an error.
Let’s take another example –
The master has the decimal value 420, binary value (110100100).
And the client tries to change it to the decimal value 421, binary value (110100101).
The result should be that it does not throw an error, because the value at the 256 bit position is the same. That the values at the other positions differ does not matter.
For Master - 420 & 256 = 256.
i.e. 110100100 & 100000000 = 100000000.
For Client – 421 & 256 = 256
i.e. 110100101 & 100000000 = 100000000
Since the values are same it does not throw an error.
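Putting the above together, the client-side trigger only has to compare the masked values of the incoming row and the master row, and reject the change if they differ. A minimal T-SQL sketch, with hypothetical table and column names (the actual Viewpoint metadata tables differ):

-- Illustrative only: table and column names are assumptions, not the real metadata tables.
CREATE TRIGGER trg_ProtectSettingBits
ON dbo.ClientSettings
AFTER UPDATE
AS
BEGIN
    DECLARE @ProtectedBit INT = 256;   -- bit position that must stay in sync with the master

    IF EXISTS (
        SELECT 1
        FROM inserted AS i
        JOIN dbo.MasterSettings AS m ON m.ScreenId = i.ScreenId
        WHERE (i.SettingValue & @ProtectedBit) <> (m.SettingValue & @ProtectedBit)
    )
    BEGIN
        RAISERROR ('Changes to a protected setting bit are not allowed.', 16, 1);
        ROLLBACK TRANSACTION;
    END
END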
Now let’s try to explain what we did for the stored procedure. The purpose of the stored procedure is to turn the value on or off if a standard value has been changed by the client, which they are not allowed to do.
Let’s consider the same example of the value being changed at the 256 position.
Note there are two parts to it. In case the client has changed the value at the 256 position to 1, we need to revert it back to 0, and in case the client has changed it to 0, we need to revert it back to 1.
The challenge was to achieve the toggle function.
Part 1 -
Now let’s consider a value of 111001010, which in decimal is 458.
[
Example of how Bitwise Exclusive OR works
(A ^ B)   
         0000 0000 1010 1010
         0000 0000 0100 1011
         -------------------
         0000 0000 1110 0001
]
The function we came up with to achieve the toggle function is:
(The number ^ the position) & 67108863.
So for our example:
(458 ^ 256) & 67108863
Or (111001010 ^ 100000000) & 11111111111111111111111111
= 011001010.
Part 2 -
Now let’s consider a value of 1011000110, which in decimal is 710.
The value at the 256 bit position is 0 and it needs to be toggled to 1.
As mentioned before the function for toggling is:
(The number ^ the position) & 67108863.
So for our example:
(710 ^ 256) & 67108863
Or (1011000110 ^ 100000000) & 11111111111111111111111111
= 1111000110.
Hence we achieved our toggling function.
We took 67108863 (2^26 - 1) as the mask since the highest bit position we needed to toggle was 33554432 (2^25), and this mask is the next value that covers all bits up to it. So we made sure we covered our requirement.
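The nightly stored procedure then only has to find the rows whose protected bit differs from the master and apply the same toggle. A minimal sketch, again with hypothetical table and column names; the mask 67108863 is the one described above:

-- Illustrative only: reverts the protected bit on the client back to the master's value.
DECLARE @ProtectedBit INT = 256;

UPDATE c
SET    c.SettingValue = (c.SettingValue ^ @ProtectedBit) & 67108863   -- toggle the mismatched bit
FROM   dbo.ClientSettings AS c
JOIN   dbo.MasterSettings AS m ON m.ScreenId = c.ScreenId
WHERE  (c.SettingValue & @ProtectedBit) <> (m.SettingValue & @ProtectedBit);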

Solution Benefits

With our deliverables we achieved the objective of ensuring that all client databases are in sync with the master and are using a group standard set of data in key tables. The solution blocked changes to protected fields while allowing clients to change certain local persisted fields.
Along with that, it took care of the following key considerations:
·         The process does not cause heavy network traffic and is as optimized as possible
·         The process is able to handle different versions of Viewpoint
·         The process is able to enforce the business logic (i.e. prohibit local changes from being made to synchronized fields that are NOT persisted)
·         The process is updateable to handle changing business logic
·         The process only updates ViewPoint tables during a specified time window
·         The process is fully auditable
·         The process  facilitates a full DTAP lifecycle

Solution extensibility

As a next set of steps, we would add the auditing functionality, and it would be fully automated. The design and functionality details are mentioned below:

Master Audit Triggers

Overview
The master audit triggers will be used to audit any changes made to the synchronized tables in the Group ViewPoint Master database.

Auditing strategy

Let’s consider the following example.
A table SUPPLIER_MSTR exists as:
Supplier_Key | Supplier_Code | Supplier_Name  | Supplier_State
123          | ABC           | Acme Supply Co | Manchester

A corresponding table SUPPLIER_MSTR_AUDIT for it exists as:
Supplier_Key | Supplier_Code | Supplier_Name  | Supplier_State | User        | Event_Type | Event_DateTime
123          | ABC           | Acme Supply Co | Manchester     | Alan.Donald | Insert     | 23-2-2012 10:15 AM

It tells us Alan.Donald created this record on 23rd Feb 2012 at 10:15 AM.
Now this morning I changed it to –
Supplier_Key | Supplier_Code | Supplier_Name  | Supplier_State
123          | ABC           | Acme Supply Co | London

SUPPLIER_MSTR will show no history, only the current state.
So SUPPLIER_MSTR_AUDIT table will be now –
Supplier_Key | Supplier_Code | Supplier_Name  | Supplier_State | User          | Event_Type | Event_DateTime
123          | ABC           | Acme Supply Co | Manchester     | Alan.Donald   | Insert     | 23-2-2012 10:15 AM
123          | ABC           | Acme Supply Co | London         | Anirban.Dutta | Update     | 21-6-2013 12:46 PM

Another record added.
SUPPLIER_MSTR:
Supplier_Key | Supplier_Code | Supplier_Name            | Supplier_State
123          | ABC           | Acme Supply Co           | London
124          | XYZ           | Worlds Greatest Retailer | Leeds

SUPPLIER_MSTR_AUDIT:
Supplier_Key | Supplier_Code | Supplier_Name  | Supplier_State | User          | Event_Type | Event_DateTime
123          | ABC           | Acme Supply Co | Manchester     | Alan.Donald   | Insert     | 23-2-2012 10:15 AM
123          | ABC           | Acme Supply Co | London         | Anirban.Dutta | Update     | 21-6-2013 12:46 PM

Then Arron.Lennon came and deleted the first record.
SUPPLIER_MSTR:
Supplier_Key | Supplier_Code | Supplier_Name            | Supplier_State
124          | XYZ           | Worlds Greatest Retailer | Leeds

SUPPLIER_MSTR_AUDIT:
Supplier_Key | Supplier_Code | Supplier_Name            | Supplier_State | User          | Event_Type | Event_DateTime
123          | ABC           | Acme Supply Co           | Manchester     | Alan.Donald   | Insert     | 23-2-2012 10:15 AM
123          | ABC           | Acme Supply Co           | London         | Anirban.Dutta | Update     | 21-6-2013 12:46 PM
124          | XYZ           | Worlds Greatest Retailer | Leeds          | Anirban.Dutta | Insert     | 21-6-2013 12:50 PM
123          | ABC           | Acme Supply Co           | London         | Arron.Lennon  | Delete     | 21-6-2013 12:53 PM

Audit table structures

Each table will have its own audit table.
The AUDIT tables will replicate the actual table along with a few extra columns.
The extra columns are:
User nvarchar(100)
Event_Type nvarchar(50) (only ‘Insert’, ‘Update’, ‘Delete’ allowed)
Event_DateTime datetime
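A minimal sketch of what such an audit trigger could look like for the SUPPLIER_MSTR example; the syntax is illustrative and the real Viewpoint table definitions will differ:

-- Illustrative only: records inserts, updates and deletes on SUPPLIER_MSTR.
CREATE TRIGGER trg_Audit_SUPPLIER_MSTR
ON dbo.SUPPLIER_MSTR
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    DECLARE @EventType NVARCHAR(50);
    SET @EventType =
        CASE
            WHEN EXISTS (SELECT 1 FROM inserted) AND EXISTS (SELECT 1 FROM deleted) THEN 'Update'
            WHEN EXISTS (SELECT 1 FROM inserted) THEN 'Insert'
            ELSE 'Delete'
        END;

    INSERT INTO dbo.SUPPLIER_MSTR_AUDIT
        (Supplier_Key, Supplier_Code, Supplier_Name, Supplier_State, [User], Event_Type, Event_DateTime)
    SELECT Supplier_Key, Supplier_Code, Supplier_Name, Supplier_State,
           SUSER_SNAME(), @EventType, GETDATE()
    FROM (SELECT Supplier_Key, Supplier_Code, Supplier_Name, Supplier_State FROM inserted
          UNION ALL
          SELECT Supplier_Key, Supplier_Code, Supplier_Name, Supplier_State FROM deleted
          WHERE NOT EXISTS (SELECT 1 FROM inserted)) AS changed;
END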


Stored Procedure run status logging

We are planning to capture the run status of each stored procedure with the following table structure:
SchemaName nvarchar(100),
StoredProcedure nvarchar(128),
UserName nvarchar(100),
StartTime datetime,
EndTime datetime,
EventStatus nvarchar(500),
ReasonForFailure nvarchar(max)

The granularity of the information capture will be object level.
For example:
If we assume four stored procedures loading four tables:
COAAccount
COAHeader
DMIxFields
EntityTypes
Then the corresponding entries in the table would be:
SchemaName     | StoredProcedure            | UserName      | StartTime    | EndTime      | EventStatus | ReasonForFailure
GBLDEVALLSECCO | USP_SynchroniseCoaAccount  | Anirban.Dutta | 1:15:0010 PM | 1:15:0017 PM | Success     |
GBLDEVALLSECCO | USP_SynchroniseCoaHeader   | Anirban.Dutta | 1:15:0018 PM | 1:15:0025 PM | Success     |
GBLDEVALLSECCO | USP_SynchroniseDmixfields  | Anirban.Dutta | 1:15:0019 PM | 1:15:0022 PM | Success     |
GBLDEVALLSECCO | USP_SynchroniseEntityTypes | Anirban.Dutta | 1:15:0023 PM | 1:15:0026 PM | Success     |
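A sketch of how each synchronization stored procedure run could be wrapped with this logging; the table name SyncRunStatus is an illustrative assumption, while the schema and procedure names are taken from the example above:

-- Illustrative only: logs the outcome of one synchronization stored procedure run.
DECLARE @Start DATETIME = GETDATE();

BEGIN TRY
    EXEC GBLDEVALLSECCO.USP_SynchroniseCoaAccount;

    INSERT INTO dbo.SyncRunStatus
        (SchemaName, StoredProcedure, UserName, StartTime, EndTime, EventStatus, ReasonForFailure)
    VALUES ('GBLDEVALLSECCO', 'USP_SynchroniseCoaAccount', SUSER_SNAME(),
            @Start, GETDATE(), 'Success', NULL);
END TRY
BEGIN CATCH
    INSERT INTO dbo.SyncRunStatus
        (SchemaName, StoredProcedure, UserName, StartTime, EndTime, EventStatus, ReasonForFailure)
    VALUES ('GBLDEVALLSECCO', 'USP_SynchroniseCoaAccount', SUSER_SNAME(),
            @Start, GETDATE(), 'Failure', ERROR_MESSAGE());
END CATCH;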





Deliverables

1.       Stored Procedure code.
2.       Trigger code.

Conclusion

The above solution was provided by using/extending VP MDT to copy the synchronized tables from Group ViewPoint Master to each client database during the VP change window, then using stored procedures run every night to overwrite and persist the group data standards.

Triggers were added to the Group ViewPoint Master for audit purposes as well as being required on the local tables to enforce the business rules.