Safeguards in business process re-engineering: its success and failure factors

1. Business process re-engineering:

 

There are several definitions of business process reengineering (BPR). Klein and Manganelli in their book “The Reengineering Handbook” define it as the “Rapid and radical redesign of strategic, value-added business processes-and the systems, policies and organizational structures that support them-to optimize workflows and productivity within an organization”.

 

Johansson and McHugh, in their book “Business Process Reengineering: Breakpoint Strategies for Market Dominance,” define it as “How an organization can achieve radical change in performance, as measured by cost, cycle time, service and quality, by the application of a variety of tools and techniques that focus on the business as a set of related customer-oriented core business processes rather than a set of organizational functions.”

 

Robert Jacobs in his book, “Real-Time Strategic Change” defines strategic change (similar in concept to BPR) as an “Informed, participative process resulting in new ways of doing business that position an entire organization for success, now and into the future.”

 

The above definitions emphasize dramatic, radical change, usually occurring in a short time frame, that affects a core business process cutting across functional lines and in which the people and human-empowerment element is crucial to success. In recent years several formal BPR CASE (computer-aided software engineering) tools and other computer-aided design tools have been employed to support the task of creating structure/process flow diagrams and modeling an organization’s data. Further, as companies have met with both success and failure in this effort, distinct stages in the overall BPR process have been identified.

 

2. The need for business process reengineering for the public sector general insurance companies:

In the current scenario of greater scrutiny of operational and IT costs, all four public sector general insurance companies are now in earnest about improving their competitiveness. To achieve improved turnaround times, high-quality results and rapid scaling of processes while realizing significantly lower costs of operations, they must give absolute focus to the right business and technology initiatives and execute them cost-effectively.

 

Customers’ awareness and expectation levels have increased, and they have become more demanding. Hence, to provide better services and customer satisfaction, PSU general insurers need to change the way they function. Inadequate business growth of a company also limits the growth opportunities of its employees; in the changing market scenario, the aspirations of employees have to be met by enabling them to realize their full potential and providing suitable growth opportunities.

 

In the present set-up of public sector general insurance companies, each operating office works as a small insurance company in itself, with full-fledged operations including underwriting and claim settlement as well as all the main back-office activities such as generation of statements and accounting. This type of set-up has certain plus points, such as clients’ easy access to offices with full operations and direct interaction with clients. At the same time, however, it has resulted in huge differences in productivity amongst the various operating offices within the same region.

 

Data captured through their system software (as well as through their outsourced IT agencies) in respect of policies issued and claims settled was analyzed. It was found that whereas the productivity of certain offices was more than double the average, there were offices whose productivity was less than half the average (when the same data set was compiled through the systems for all offices live in India, the differences in productivity were found to be even larger).
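
A minimal sketch of such a productivity screen is given below; the office codes, the productivity measure (say, policies issued plus claims settled per employee) and all figures are invented for illustration.

```python
# Hypothetical productivity figures per operating office
# (e.g., policies issued plus claims settled per employee).
offices = {"BO-101": 420, "BO-102": 95, "BO-103": 210, "BO-104": 880}

average = sum(offices.values()) / len(offices)
for office, productivity in offices.items():
    if productivity > 2 * average:
        print(f"{office}: {productivity} (more than double the average of {average:.0f})")
    elif productivity < average / 2:
        print(f"{office}: {productivity} (less than half the average of {average:.0f})")
```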

 

There may be various reasons explaining sub-par productivity; however, the fact remains that resources are not efficiently utilized. It was also found in one of the PSUs that the average time taken for settlement of small own-damage claims of up to Rs. 35,000 was significantly more than 30 days.

 

The average time taken for settlement of motor own-damage claims was 110 days. In a competitive environment like today’s insurance business in India, this kind of delay in claim settlement leads to negative publicity and hurts client satisfaction and premium growth. It was also found that, on average, only 15% of the time in operating offices is spent on sales and channel development; the remaining 85% is spent on other activities such as underwriting, claim settlement, statements, accounts, record keeping, and matters of personnel, estate and establishment.

 

Insurance is a marketing activity, and operating offices should be relieved of certain of these activities to enable them to devote more time to sales and channel development. The Central Government itself has instructed these four PSUs to begin the implementation process in earnest, monitor the progress and submit reports at regular intervals.

 

To take care of the above problem, it was decided in various project group and steering committee meetings of these public sector general insurance companies to take certain functions out of the operating offices and give them to a separate centralized claim-settling office called a ‘Service Center’, so as to free the in-charges of the operating offices from these non-marketing activities and allow them to concentrate on marketing. It is felt that this will also result in improved resource allocation, due to scale effects and better process control.

 

A service center is a specialized office that will be able to settle claims faster for the offices attached to it, since a dedicated team of officers will handle these claims within a given target settlement time. It will also help in developing expertise in underwriting and claims.

 

The basic objectives of these centralized hubs are mainly:

  • To increase client focus;
  • To reduce man-hours spent on non-customer-centric jobs;
  • To redeploy the employees in the operating offices who were engaged in the work now entrusted to the claims hub: freed from that work (as it is being shouldered by the centralized hubs), they will be diverted to the Customers’ Facilitation Units proposed at local levels in the various operating offices. The purpose of these units is to assist agents in discharging their marketing service and to train them in the various operational areas.

Every PSU general insurance company’s Information Technology (IT) department needs to consider the following IT aspects:

  • Grant permission for remote access to the operating office database, through a special user ID, to record claim intimation and claim settlement approval.
  • Incorporate the IRDA-approved category-wise surveyors’ panel in the Global Master, duly short-listed by the competent authority, enlisting and delisting surveyors based on their professional qualifications, work performance and any dilatory or dishonest practices.
  • Provide the patch for withdrawal of the facility for claim intimation/approval/surveyor deputation in respect of claims at the operating office level.

3. Various stages of preparation:

The stages, and the related questions that the evaluation of each stage needs to address, are given below seriatim:

 

Stage: Preparation: What is the level of organizational commitment? What are the expectations? What are the project goals? Who should be on the team? What are the required skill sets? How will the results be communicated to the organization? Should we go for a central server or depend on a local area network? What measures are to be taken for disaster control, back-up and the related recovery of data?

 

Stage: Identification: What are the major business processes? How do these processes interact with customer and supplier processes? What are the strategic processes? What are the business breakpoints? Which processes should be re-engineered within 90 days, which within one year, and which thereafter?

 

Stage: Vision: To be the most preferred choice of customers for their general insurance requirements, insurers need to build relationships with customers, intermediaries and employees. Strategic initiatives bear on:

  • Financial focus areas – Initiatives which focus on growth, profitability, solvency, etc
  • Customer focus areas – Initiatives which focus on customer services, building relationships with customers, channel partners, etc
  • Human and organizational focus areas – Initiatives which focus on employee growth, organization structure, etc
  • Internal Business focus areas – Initiatives which focus on our internal business processes
  • Whether the organizational structure and the work culture need a serious relook
  • Whether the working ethos of the various offices is sufficiently customer-oriented
  • Reinforcing employees’ feeling of pride in, and belongingness to, the organization
  • Making the organization more aware of current market realities and practices; the attitude of the operating offices will have to be developed further to keep pace with competitors.

What are the sub-processes, activities, and steps that make up the major business processes? How do resources, information, and work flow through each process? Why do we do the things we do now (getting out of the box, or mental confinement)? What are the underlying business and technology assumptions? Are there ways to achieve business goals that seem impossible today, if only we dare to dream? What are the boundaries between business processes and key business partners (intermediaries, suppliers, customers, etc.)? How might these boundaries be redefined to improve overall performance? What are the key benchmarking measures for assessing performance against the “best of breed”? What are the specific improvement goals? What is the vision and strategy for change? How best can associates collaborate in the process and share the vision and strategy for change?

 

Stage: Preparation of Feasibility Reports:

  • Strategy report (including HR recommendations) – which talks about what we wish to achieve
  • Process report – which talks about how we wish to achieve our objectives
  • IT report – which talks about using the latest technology to implement our improved processes to achieve our objectives

Stage: Solution: Technical Design: What are the technical resources and technologies needed in the reengineered process?

Stage: Solution: Social Design: What are the required human resources? What immediate, near-term and long-range opportunities exist? How will responsibilities change? What training programs will be required? Who is most likely to resist change? How can they be motivated to accept or participate in this change? What will the new organization look like?

Stage: Configuration of all the applications by domain specialists: What methodologies and application software (core business; financials, including premium collections, settlement of claims, commission and cash-to-order/petty cash/other cash-flow requirements; human resource management; customer relationship management) will suit the configuration of the various core business (insurance) applications?

Stage: User acceptance test: How are test data to be prepared and provided for conducting user acceptance tests? Who will evaluate the results and suggest changes?

Stage: Transformation: How and when should progress be monitored? How should unanticipated problems be handled? How is the momentum for continuous change sustained? Who will work as the change agent, and how are change requests to be registered? Finally, who will escalate and validate change requests for functional domain requirements?

BPR is an on-going process critical to an organization’s success in a competitive marketplace. I suspect that if I ask most executives of mid-size to large local corporations whether they are using BPR, they will answer in the affirmative. But I know from experience that if I dig deeper, what I will find is that what they term reengineering is usually a combination of incremental advances in information technology (a new client/server system, a new network, a new software package, a new “Director of Strategic Planning”) and market opportunism (getting a new government or overseas contract, expanding an already profitable area of their business, etc.).

4. Most common shortcomings in the BPR process:

The following are the most common shortcomings I have observed in practice (apart from some other sources of failure related to would-be process requirements):

Failure Point 1: Spending megabucks on new technology while giving little or no thought to changing the organization’s underlying business processes. The latter is often far more difficult, since it involves invading political turfs and soul-searching by the company’s key executives.

Failure Point 2: Delegating the task of reengineering to an outside consulting firm. Usually, this firm has little or no track record in reengineering or industry-specific experience. The outside firm is a sort of “crutch”, relieving the organization of the sometimes arduous but always rewarding task of empowering and involving its employees at all levels in the reengineering process. Often this outside firm is used to help make the technology decision, a task it is usually only marginally qualified for.

Failure Point 3: On the other hand, involving the right outside consulting firm can be critical in breaking down organizational barriers and providing a fresh, presumably objective organizational assessment. The outside firm can also facilitate team building, which is critical to sustaining the reengineering process. All too many companies will tell me that they know their problems, so why bring in an outside firm? But do they know their problems? Have they developed a clear methodology to address their reengineering needs?

Failure Point 4: Inability to identify key breakpoints in core business processes. Breakpoints are defined as the achievement of excellence in one or more value metrics where the marketplace recognizes the advantage, and where the ensuing result is a disproportionate and sustained increase in the supplier’s market share.

Failure Point 5: Another common error I see is that most companies fail to commit the resources, internal or external, to the task. Their key executives are so busy putting out fires that they think they don’t have time to address BPR planning needs. The key term here is “think”. BPR often addresses the most “screwed up” processes of a company; if these are not addressed, they fester and can mean ultimate disaster.

Failure Point 6: Data migration strategies – is failure of the reengineered project the norm? General insurance companies that have been operating for a considerable period are concerned with data migration issues whenever they opt for a new core insurance solution. Their records are required to be kept for ten years, and when a company moves to a new platform for core insurance operations, all data must be uploaded from the earlier operating system to the new system.

Here arise the issues related to data migration. “Not only time but company credibility is regarded as a corporate asset. Running the risk of project default and/or cancellation is all too commonplace, and extensive investment in refining insufficient data facilities is costly and counterproductive. This approach alone can jeopardize a company’s timetable and competitiveness,” said Jim Johnson, Chairman of The Standish Group.

Unfortunately, most data migration projects don’t go as smoothly as anticipated. According to The Standish Group, in 1998, 74 percent of all IT projects either overran or failed, resulting in almost $100 billion in unexpected costs.

Of the 15,000 data migration projects started in 1999, as many as 88 percent were expected either to overrun or to fail. One of the primary reasons for this extraordinary failure rate is the lack of a thorough understanding of the source data early on in these projects.

Conventional approaches to data profiling and migration can create nearly as many problems as they resolve: data not loading properly, poor-quality data and compounded inaccuracies, time and cost overruns and, in extreme cases, late-stage project cancellations. The adage “garbage in, garbage out” is the watchword here.

The data migration requirement for the core application of every PSU company has to be addressed in the proposal for a data migration solution within the BPR exercise, covering the challenges and complexities of migrating the erstwhile system’s data structure into the data structure of the new core system to be implemented.

5. Important issues for achieving success in efficient data migration:

1. Understanding Your Data: The First Essential Step-

Before undertaking a large-scale legacy-to-new-application data migration, data analysts need to learn as much as possible about the data they plan to move. If, for example, you have a variety of pegs that you’d like to fit into round holes, it’s best to find out which ones are already round and which ones are square, triangular, rectangular, and so on. Considering the magnitude of an organization’s data, the data analyst’s task of obtaining this knowledge is overwhelming, to say the least.

 

IT organizations can begin their data analysis by implementing a two-step process: data profiling and mapping. Data profiling involves studying the source data thoroughly to understand its content, structure, quality, and integrity. Once the data has been profiled, an accurate set of mapping specifications can be developed based on this profile, a process called data mapping.

 

The combination of data profiling and mapping comprises the essential first step in any successful data migration project and should be completed before attempting to extract, scrub, transform and load the data into the target database.

 

A mandatory data migration service for each insurer will ensure that data is migrated on an as-is basis into the new data structure, except for data-type changes, data-length changes and default values. The scope of this process comprises the items below (a sketch of the validation step follows the list):

  • Data Migration Requirement Study;
  • Understanding of Target Templates;
  • Data Structure Mapping;
  • Script Development;
  • Script Testing with Operating Office Databases;
  • Data Consolidation;
  • User Application Test (UAT) with the Operating Office Databases;
  • Data Validation with Exception Reporting;
  • Data de-duplication;
  • Post UAT Customer Requirement Services.
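
As an illustration of the “Data Validation with Exception Reporting” item, here is a minimal sketch; the rules and field names (policy_no, premium) are assumptions, not taken from any insurer’s actual schema.

```python
# Records failing a validation rule go to an exception report
# instead of silently loading into the target database.
def validate(record: dict) -> list:
    errors = []
    if not record.get("policy_no"):
        errors.append("missing policy_no")
    if record.get("premium", 0) <= 0:
        errors.append("non-positive premium")
    return errors

source = [{"policy_no": "P1", "premium": 5000},
          {"policy_no": "", "premium": -10}]

migrated, exceptions = [], []
for rec in source:
    errs = validate(rec)
    (exceptions if errs else migrated).append((rec, errs))

for rec, errs in exceptions:
    print("EXCEPTION:", rec, "->", "; ".join(errs))
```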

2. Conventional Techniques in Data Migration: Problems and Pitfalls:

 

The conventional approach to data profiling and mapping starts with a large team of people (mainly consisting of data and business analysts, data administrators, database administrators, system designers, subject matter experts, etc.).

 

These people meet in a series of joint application development (JAD) sessions and attempt to extract useful information about the content and structure of the legacy data sources by examining outdated documentation, COBOL copybooks, inaccurate metadata and, in some cases, the physical data itself.

 

Typically, this is a very labor-intensive process supplemented, in some cases, by semi-automated query techniques. Profiling legacy data in this way is extremely complex, time-intensive and error-prone. Once the process is complete, only a limited understanding of the source data is achieved.

 

At that point, according to the project flow chart, the data analyst moves on to the mapping phase. However, since the source data is so poorly understood and inferences about it are largely based on assumptions rather than facts, this phase typically results in an inaccurate data model and set of mapping specifications. Based on this information, the data is extracted, scrubbed, transformed and loaded into the new database.

 

Not surprisingly, in almost all cases, the new system doesn’t work correctly the first time. Then the rework process begins: redesigning, recoding, reloading and retesting. At best, the project incurs significant time and cost overruns. At worst, faced with runaway costs and no clear end in sight, senior management cancels the project, preferring to live with an inefficient but partially functional information system rather than incur the ongoing costs of an “endless” data migration project.

 

3. Strategies for Data Migration: 6 Steps to Preparing Your Data-

 

Data profiling and mapping consist of six sequential steps, three for data profiling and three for data mapping, with each step building on the information produced in the previous steps. The resulting transformation maps, in turn, can be used in conjunction with third-party data migration tools to extract, scrub, transform and load the data from the old system to the new system.

 

Data sources are profiled in three dimensions: down columns (column profiling); across rows (dependency profiling); and across tables (redundancy profiling).

 

Column Profiling: Column profiling analyzes the values in each column or field of source data, inferring detailed characteristics for each column, including data type and size, range of values, frequency and distribution of values, cardinality, and null and uniqueness characteristics. This step allows analysts to detect and analyze data content quality problems and to evaluate discrepancies between the inferred (true) metadata and the documented metadata.
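
A bare-bones column-profiling sketch in Python/pandas follows; the file name is hypothetical and the characteristics computed are only a subset, chosen for illustration.

```python
import pandas as pd
from pandas.api.types import is_numeric_dtype

def profile_columns(df: pd.DataFrame) -> pd.DataFrame:
    """Infer basic characteristics for each column of a source table."""
    rows = []
    for col in df.columns:
        s = df[col]
        rows.append({
            "column": col,
            "inferred_dtype": str(s.dtype),
            "null_pct": round(100 * s.isna().mean(), 2),
            "cardinality": s.nunique(),       # number of distinct values
            "unique": bool(s.is_unique),      # candidate key?
            "min": s.min() if is_numeric_dtype(s) else None,
            "max": s.max() if is_numeric_dtype(s) else None,
        })
    return pd.DataFrame(rows)

# Usage (hypothetical file):
# print(profile_columns(pd.read_csv("legacy_policies.csv")))
```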

 

Dependency Profiling: Dependency profiling analyzes data across rows, comparing values in every column with values in every other column, and infers all dependency relationships that exist between attributes within each table. This process cannot be accomplished manually. Dependency profiling identifies primary keys and whether or not expected dependencies (e.g., those imposed by a new application) are supported by the data. It also identifies “gray-area dependencies”: those that are true most of the time, but not all of the time, and are usually an indication of a data quality problem.
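
The sketch below illustrates the idea for pairs of columns only (full dependency profiling considers combinations of columns as well); the 0.95 gray-area threshold is an arbitrary assumption.

```python
import pandas as pd
from itertools import permutations

def dependency_profile(df: pd.DataFrame, threshold: float = 0.95) -> list:
    findings = []
    for a, b in permutations(df.columns, 2):
        # For each value of column a, count the distinct values of b it maps to.
        per_group = df.groupby(a)[b].nunique(dropna=True)
        conforming = (per_group <= 1).mean()  # share of a-values determining one b
        if conforming == 1.0:
            findings.append((a, b, "functional dependency"))
        elif conforming >= threshold:
            findings.append((a, b, f"gray-area ({conforming:.1%} conforming)"))
    return findings
```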

 

Redundancy Profiling: Redundancy profiling compares data between tables of the same or different data sources, determining which columns contain overlapping or identical sets of values. It looks for repeating patterns among an organization’s “islands of information”: billing systems, sales force automation systems, post-sales support systems, etc. Redundancy profiling identifies attributes containing the same information but with different names (synonyms) and attributes that have the same name but different business meanings (homonyms). It also helps determine which columns are redundant and can be eliminated, and which are necessary to connect information between tables.

 

Redundancy profiling eliminates processing overhead and reduces the probability of error in the target database. As with dependency profiling, this process cannot be accomplished manually.
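
A simplified redundancy-profiling sketch is shown below; it flags candidate synonym columns by value-set overlap between two tables, with the 0.8 overlap threshold chosen arbitrarily for illustration.

```python
import pandas as pd

def redundancy_profile(t1: pd.DataFrame, t2: pd.DataFrame,
                       min_overlap: float = 0.8) -> list:
    overlaps = []
    for c1 in t1.columns:
        v1 = set(t1[c1].dropna())
        for c2 in t2.columns:
            v2 = set(t2[c2].dropna())
            if not v1 or not v2:
                continue
            jaccard = len(v1 & v2) / len(v1 | v2)  # overlap of the two value sets
            if jaccard >= min_overlap:
                overlaps.append((c1, c2, round(jaccard, 3)))
    return overlaps  # candidate synonym columns for consolidation
```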

Once the data profiling process is finished, the profile results can be used to complete the remaining three data mapping steps of a migration project: normalization, model enhancement, and transformation mapping.

Normalization: By building a fully normalized relational model that is based on, and fully supported by, the consolidated profile of all the data, the risk of the data model failing is greatly reduced.
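
As a toy example of what normalization does (all names invented): a flat legacy extract that repeats client details on every policy row is split into a client table keyed by client_id and a policy table that references it.

```python
import pandas as pd

flat = pd.DataFrame({
    "policy_no":   ["P1", "P2", "P3"],
    "client_id":   ["C1", "C1", "C2"],
    "client_name": ["Asha", "Asha", "Ravi"],   # repeated on every policy row
    "premium":     [5000, 7500, 3200],
})

clients  = flat[["client_id", "client_name"]].drop_duplicates()  # one row per client
policies = flat[["policy_no", "client_id", "premium"]]           # references client_id
```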

 

Model Enhancement: This process involves modifying the normalized model by adding structures to support new requirements or by adding indexes and denormalizing the structures to enhance performance.

 

Transformation Mapping: Once the data model modifications are complete, a set of transformation maps can be created to show the relationships between columns in the source files and tables in the enhanced model, including attribute-to-attribute flows. Ideally, these transformation maps facilitate the capture of scrubbing and transformation requirements and provide essential information to the programmers creating conversion routines to move data from the source to the target database.
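
A minimal way to represent such a transformation map in code is sketched below; all source and target names, and both transforms, are hypothetical.

```python
# Each entry records where a target attribute comes from and how it is derived.
TRANSFORMATION_MAP = [
    {"target": "policy.premium_paise",
     "source": ["LEGACY_POL.PREM_RS"],
     "transform": lambda prem_rs: round(prem_rs * 100)},          # rupees -> paise
    {"target": "policy.holder_name",
     "source": ["LEGACY_POL.FNAME", "LEGACY_POL.LNAME"],
     "transform": lambda fn, ln: f"{fn.strip()} {ln.strip()}".title()},
]

def apply_map(source_row: dict) -> dict:
    """Build one target row from one source row using the map."""
    return {m["target"]: m["transform"](*(source_row[s] for s in m["source"]))
            for m in TRANSFORMATION_MAP}

print(apply_map({"LEGACY_POL.PREM_RS": 1234.50,
                 "LEGACY_POL.FNAME": " asha ", "LEGACY_POL.LNAME": "rao"}))
```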

 

Developing an accurate profile of existing data sources is the essential first step in any successful data migration project. By executing a sound data profiling and mapping strategy, small, focused teams of technical and business users can quickly perform the highly complex tasks necessary to achieve a thorough understanding of the source data: a level of understanding that simply cannot be achieved through conventional processes and semi-automated query techniques.

 

The requirement for better decisions:

 

By following these six steps in data profiling and mapping, companies can bring their data migration projects to successful completion the first time around, eliminating extensive design rework and late-stage project cancellations. A good data profiling and mapping methodology will take into account the entire scope of a project, even warning IT management if the business objectives of the project are not supported by the data.

 

Data profiling and mapping, if done correctly, can dramatically lower project risk, enabling valuable resources to be redirected to other, more fruitful projects. Finally, it will deliver higher data and application quality, resulting in more informed business decisions, which typically translate into greater revenues and profits.

Despite today’s information explosion, business leaders are operating with bigger blind spots. According to a recent IBM Global Business Services survey of 225 business executives, one out of every three is making major decisions with incomplete or untrustworthy information.

 

But they can’t fall back on traditional methods of decision making. Experience and intuition aren’t sufficient when confronting the far-reaching changes driven by macroeconomic upheaval and the familiar forces of a world shrinking and flattening. Executives recognize that new analytics capabilities, coupled with advanced business process management, signal a major opportunity to create a business advantage. Those who have the vision to apply new approaches are building intelligent enterprises positioned to thrive.

 

The requirement to face the challenges and complexities of Data Migration:

 

Since the earlier data structure may not match exactly when migrating into the data structure of the newly selected core system, the following issues require to be addressed (a small sketch after the list illustrates two of them):

  • The difference in the structure of control numbers/ master codes;
  • De-duplication from source data (Master Data);
  • Segregation of combined source data structure to multiple data items;
  • Premium mismatch, if any, during renewal/ endorsement;
  • Mapping of operational data items in source to mandatory data items in the target;
  • Data-type mismatch;
  • Larger to smaller target size mismatch;
  • Derived target data using complex logic involving a large number of scattered source data items;
  • Data cleaning of various fields always requires knowledge of the original proposal;
  • Various office-specific issues/ cases of complicated policies;
  • Policies issued with inadequate data, e.g. general policy/endorsement;
  • Source endorsement data structure;
  • Aggregated data item in target from scattered data items;
  • Rounding off issues arising during aggregation;
  • One composite source item to multiple target items;
  • Schedule Policies with specific annexure;
  • Package policies involving multiple lines of business (LOBs);
  • Data identification after drilling down to multiple levels, etc.
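
For instance, two of the items above, data-type mismatch and rounding during aggregation, can be handled as in this small sketch; the field formats and figures are invented.

```python
from decimal import Decimal, ROUND_HALF_UP

def to_decimal(raw: str) -> Decimal:
    """Legacy amounts arrive as strings like ' 1,234.505 '; cast them safely."""
    return Decimal(raw.strip().replace(",", ""))

installments = [" 1,234.505 ", "2,765.495", "100.00"]
total = sum(to_decimal(x) for x in installments)           # aggregate first
total = total.quantize(Decimal("0.01"), ROUND_HALF_UP)     # round once, at the end
print(total)  # 4100.00 -- rounding each item first would have drifted by a paisa
```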

6. Summing up:

 

In summary, the common mistakes identified are:

(1) An unclear definition of just what BPR involves and of the point of focus of the process change;

(2) Unrealistic expectations at different levels;

(3) Inadequate resources;

(4) Taking too long over implementation; BPR should produce tangible results within realistic timeframes;

(5) Lack of sponsorship/shift of business focus;

(6) Wrong scope (either too narrow or too wide);

(7) Too great (or too little) reliance on new information technology adopted;

(8) Lack of an effective methodology;

(9) Failure to fix responsibility and related accountability in the various spheres;

(10) Poor integration of the various programs, systems, processes and application software, such as CRM, HRMS, financials, core insurance and document management;

(11) Not taking adequate measures to fulfill proper training requirements for all concerned. 

 

The basic parameters to check constantly, consistently and continuously are improved turnaround times, high-quality results and rapid scaling of processes. Measurement of these parameters should be built into the process itself, so that it not only helps the internal audit department of the organization but also gives corporate management an instant, ready reckoner of performance. Manual calculations not only yield erroneous data; very often they are misleading.
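
An illustrative sketch of such built-in measurement is given below; the claim records and field names are hypothetical. Because every claim carries its own timestamps, the turnaround time comes straight from the data, with no manual calculation.

```python
from datetime import date

claims = [
    {"claim_no": "C001", "intimated": date(2009, 11, 2), "settled": date(2009, 11, 25)},
    {"claim_no": "C002", "intimated": date(2009, 10, 15), "settled": date(2010, 1, 20)},
]

tats = [(c["settled"] - c["intimated"]).days for c in claims]  # turnaround in days
print(f"average TAT: {sum(tats) / len(tats):.1f} days, worst: {max(tats)} days")
```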


By Mr. Anabil Bhattacharya, B.M.E. (Hons.), F.I.I.I., Published in The Insurance Times, January 2010
