Wednesday, May 7, 2014

ETL - Data Cleansing Overview & Definition

Wikipedia:
Data cleansing, data cleaning or data scrubbing is the process of detecting and correcting (or removing) corrupt or inaccurate records from a record set, table, or database. Used mainly in databases, the term refers to identifying incomplete, incorrect, inaccurate, or irrelevant parts of the data and then replacing, modifying, or deleting this dirty data.

After cleansing, a data set will be consistent with other similar data sets in the system. The inconsistencies detected or removed may have been originally caused by user entry errors, by corruption in transmission or storage, or by different data dictionary definitions of similar entities in different stores.

Data cleansing differs from data validation in that validation almost invariably means data is rejected from the system at entry time, rather than being cleansed later in batches.

The actual process of data cleansing may involve removing typographical errors or validating and correcting values against a known list of entities. The validation may be strict (such as rejecting any address that does not have a valid postal code) or fuzzy (such as correcting records that partially match existing, known records).
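
As a rough illustration of strict versus fuzzy validation, the Python sketch below rejects any US ZIP code that is not exactly five digits, and corrects a misspelled city name by fuzzy-matching it against a small, hypothetical list of known cities (the city list and the 0.8 similarity cutoff are illustrative assumptions, not part of any standard).

```python
import difflib
import re

KNOWN_CITIES = ["Springfield", "Shelbyville", "Capital City"]  # hypothetical reference list

def strict_validate_zip(zip_code):
    """Strict rule: accept only a 5-digit US ZIP code, reject everything else."""
    return re.fullmatch(r"\d{5}", zip_code) is not None

def fuzzy_correct_city(city):
    """Fuzzy rule: replace a near-miss with the closest known city, or give up (None)."""
    matches = difflib.get_close_matches(city, KNOWN_CITIES, n=1, cutoff=0.8)
    return matches[0] if matches else None

print(strict_validate_zip("9021A"))       # False -> the record would be rejected
print(fuzzy_correct_city("Springfeild"))  # 'Springfield' -> the record is corrected
```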

Some data cleansing solutions clean data by cross-checking it against a validated data set. Data enhancement, where data is made more complete by adding related information, is also a common data cleansing practice: for example, appending to an address the phone numbers related to that address.
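
As a minimal sketch of this kind of enhancement, the snippet below joins an incoming record set against a hypothetical validated reference set keyed by address, appending a phone number where a match exists (both data sets and the field names are invented for illustration).

```python
# Hypothetical validated reference set: address -> known phone number.
reference = {
    "12 Main Street, Springfield": "555-0100",
    "34 Oak Avenue, Shelbyville": "555-0199",
}

records = [
    {"name": "A. Smith", "address": "12 Main Street, Springfield"},
    {"name": "B. Jones", "address": "99 Unknown Road, Nowhere"},
]

for record in records:
    # Enhancement: append the phone number when the address appears in the validated set,
    # otherwise leave the field empty (None) rather than inventing a value.
    record["phone"] = reference.get(record["address"])

print(records)
```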

Data cleansing may also involve harmonization and standardization of data: for example, harmonizing short codes (St, Rd, etc.) to actual words (Street, Road). Standardization of data is a means of changing a reference data set to a new standard, e.g., the use of standard codes.
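
A minimal sketch of such harmonization, assuming a small hand-made map of short codes (only the codes listed in the map are handled):

```python
import re

# Hypothetical harmonization map: short codes to the full words they stand for.
HARMONIZE = {"st": "Street", "rd": "Road", "ave": "Avenue"}

def harmonize_address(address):
    """Replace known short codes (with or without a trailing dot) by their full words."""
    pattern = r"\b(" + "|".join(HARMONIZE) + r")\b\.?"
    return re.sub(pattern,
                  lambda m: HARMONIZE[m.group(1).lower()],
                  address,
                  flags=re.IGNORECASE)

print(harmonize_address("12 Main St"))   # 12 Main Street
print(harmonize_address("34 Oak rd."))   # 34 Oak Road
```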


#Data Quality
High-quality data needs to pass a set of quality criteria. Those include:

  • Validity: The degree to which the measures conform to defined business rules or constraints (see also Validity (statistics)). When modern database technology is used to design data-capture systems, validity is fairly easy to ensure: invalid data arises mainly in legacy contexts (where constraints were not implemented in software) or where inappropriate data-capture technology was used (e.g., spreadsheets, where it is very hard to limit what a user chooses to enter into a cell). Data constraints fall into the following categories (a short validation sketch follows this list):
  • Data-Type Constraints: e.g., values in a particular column must be of a particular datatype, e.g., Boolean, numeric (integer or real), date, etc.
  • Range Constraints: typically, numbers or dates should fall within a certain range. That is, they have minimum and/or maximum permissible values.
  • Mandatory Constraints: Certain columns cannot be empty.
  • Unique Constraints: A field, or a combination of fields, must be unique across a dataset. For example, no two persons can have the same social security number.
  • Set-Membership constraints: The values for a column come from a set of discrete values or codes. For example, a person's gender may be Female, Male or Unknown (not recorded).
  • Foreign-key constraints: This is the more general case of set membership. The set of values in a column is defined in a column of another table that contains unique values. For example, in a US taxpayer database, the "state" column is required to belong to one of the US's defined states or territories: the set of permissible states/territories is recorded in a separate States table. The term foreign key is borrowed from relational database terminology.
  • Regular expression patterns: Occasionally, text fields will have to be validated this way. For example, phone numbers may be required to have the pattern (999) 999-9999.
  • Decleansing: detecting errors and syntactically removing them so that the data can be processed more reliably downstream.
  • Cross-field validation: Certain conditions that utilize multiple fields must hold. For example, in laboratory medicine, the sum of the components of the differential white blood cell count must be equal to 100 (since they are all percentages). In a hospital database, a patient's date of discharge from hospital cannot be earlier than the date of admission.
  • Accuracy: The degree of conformity of a measure to a standard or a true value - see also Accuracy and precision. Accuracy is very hard to achieve through data-cleansing in the general case, because it requires accessing an external source of data that contains the true value: such "gold standard" data is often unavailable. Accuracy has been achieved in some cleansing contexts, notably customer contact data, by using external databases that match up zip codes to geographical locations (city and state), and also help verify that street addresses within these zip codes actually exist.
  • Completeness: The degree to which all required measures are known (see also Completeness). Incompleteness is almost impossible to fix with data cleansing methodology: one cannot infer facts that were not captured when the data in question was initially recorded. (In some contexts, e.g., interview data, it may be possible to fix incompleteness by going back to the original source of data, i.e., re-interviewing the subject, but even this does not guarantee success because of problems of recall - e.g., in an interview to gather data on food consumption, no one is likely to remember exactly what one ate six months ago.) In the case of systems that insist certain columns should not be empty, one may work around the problem by designating a value that indicates "unknown" or "missing", but supplying default values does not imply that the data has been made complete.
  • Consistency: The degree to which a set of measures are equivalent across systems (see also Consistency). Inconsistency occurs when two data items in the data set contradict each other: e.g., a customer is recorded in two different systems as having two different current addresses, and only one of them can be correct. Fixing inconsistency is not always possible: it requires a variety of strategies - e.g., deciding which data were recorded more recently, which data source is likely to be most reliable (the latter knowledge may be specific to a given organization), or simply trying to find the truth by testing both data items (e.g., calling up the customer).
  • Uniformity: The degree to which a set of data measures are specified using the same units of measure in all systems (see also Unit of measure). In datasets pooled from different locales, weight may be recorded either in pounds or kilograms, and must be converted to a single measure using an arithmetic transformation.
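To make the constraint categories above concrete, here is a small Python sketch (referenced from the Validity item) that checks one hypothetical hospital-style record against a handful of them: data-type, range, set-membership, mandatory, regular-expression and cross-field rules. All field names, codes and limits are invented for illustration.

```python
import re
from datetime import date

# One hypothetical record; every field name, code and limit below is illustrative only.
row = {
    "age": 34,                      # range constraint: 0..130
    "gender": "Female",             # set-membership constraint
    "ssn": "123-45-6789",           # mandatory + regular-expression pattern
    "admitted": date(2014, 5, 1),   # cross-field: admission must not follow discharge
    "discharged": date(2014, 5, 7),
}

errors = []

if not isinstance(row.get("age"), int):                      # data-type constraint
    errors.append("age must be an integer")
elif not 0 <= row["age"] <= 130:                             # range constraint
    errors.append("age out of range")

if row.get("gender") not in {"Female", "Male", "Unknown"}:   # set-membership constraint
    errors.append("unrecognized gender code")

if not row.get("ssn"):                                       # mandatory constraint
    errors.append("ssn is required")
elif not re.fullmatch(r"\d{3}-\d{2}-\d{4}", row["ssn"]):     # regular-expression pattern
    errors.append("ssn does not match the expected pattern")

if row["discharged"] < row["admitted"]:                      # cross-field validation
    errors.append("discharge date precedes admission date")

print(errors or "row passes all checks")
```
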
#Decleanse
  • Parsing: for the detection of syntax errors. A parser decides whether a string of data is acceptable within the allowed data specification. This is similar to the way a parser works with grammars and languages.
  • Data transformation: Data transformation allows the mapping of the data from its given format into the format expected by the appropriate application. This includes value conversions or translation functions, as well as normalizing numeric values to conform to minimum and maximum values.
  • Duplicate elimination: Duplicate detection requires an algorithm for determining whether data contains duplicate representations of the same entity. Usually, data is sorted by a key that would bring duplicate entries closer together for faster identification (see the sketch after this list).
  • Statistical methods: By analyzing the data using the values of mean, standard deviation, range, or clustering algorithms, it is possible for an expert to find values that are unexpected and thus erroneous. Although the correction of such data is difficult since the true value is not known, it can be resolved by setting the values to an average or other statistical value. Statistical methods can also be used to handle missing values which can be replaced by one or more plausible values, which are usually obtained by extensive data augmentation algorithms.
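As a rough sketch of the last two techniques, the snippet below sorts a tiny, made-up record list on a key so that duplicates become adjacent and can be dropped, and then flags weight values lying more than two standard deviations from the mean as suspect (the data, the key choice and the two-sigma threshold are all illustrative assumptions).

```python
import statistics

# Duplicate elimination: sort on a key so duplicate entries become adjacent, then drop repeats.
records = [("alice", "NY"), ("bob", "LA"), ("alice", "NY")]
records.sort(key=lambda r: r[0])
deduped = [r for i, r in enumerate(records) if i == 0 or r != records[i - 1]]
print(deduped)        # [('alice', 'NY'), ('bob', 'LA')]

# Statistical method: flag values far from the mean as suspect (they still need expert review).
weights = [70.2, 68.5, 71.0, 69.8, 70.5, 69.1, 70.9, 68.8, 71.3, 69.5, 640.0]
mean, stdev = statistics.mean(weights), statistics.stdev(weights)
suspects = [w for w in weights if abs(w - mean) > 2 * stdev]
print(suspects)       # [640.0] -- probably a data-entry error (wrong unit or an extra digit)
```
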
#Challenges and problems
  • Error correction and loss of information: The most challenging problem within data cleansing remains the correction of values to remove duplicates and invalid entries. In many cases, the available information on such anomalies is limited and insufficient to determine the necessary transformations or corrections, leaving the deletion of such entries as a primary solution. The deletion of data, though, leads to loss of information; this loss can be particularly costly if there is a large amount of deleted data.
  • Maintenance of cleansed data: Data cleansing is an expensive and time-consuming process. So after having performed data cleansing and achieved a data collection free of errors, one would want to avoid re-cleansing the data in its entirety after some values in the data collection change. The process should only be repeated on values that have changed; this means that a cleansing lineage would need to be kept, which would require efficient data collection and management techniques (a minimal lineage sketch follows this list).
  • Data cleansing in virtually integrated environments: In virtually integrated sources like IBM’s DiscoveryLink, the cleansing of data has to be performed every time the data is accessed, which considerably decreases the response time and efficiency.
  • Data-cleansing framework: In many cases, it will not be possible to derive a complete data-cleansing graph to guide the process in advance. This makes data cleansing an iterative process involving significant exploration and interaction, which may require a framework in the form of a collection of methods for error detection and elimination in addition to data auditing. This can be integrated with other data-processing stages like integration and maintenance.
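One way to keep a simple cleansing lineage, as mentioned in the maintenance item above, is to fingerprint each row when it is cleansed and re-cleanse only rows whose fingerprint has since changed. The sketch below is a minimal, illustrative version of that idea; the helper names and the in-memory lineage store are assumptions, not an established API.

```python
import hashlib
import json

def row_fingerprint(row):
    """Stable hash of a row's content, used as a cheap lineage marker."""
    return hashlib.sha256(json.dumps(row, sort_keys=True).encode()).hexdigest()

lineage = {}  # row id -> fingerprint recorded when the row was last cleansed

def incremental_cleanse(rows, cleanse):
    """Re-run the cleansing function only on rows that are new or have changed."""
    for row_id, row in rows.items():
        if lineage.get(row_id) == row_fingerprint(row):
            continue                          # unchanged since the last run: skip
        rows[row_id] = cleanse(row)           # new or changed: cleanse again
        lineage[row_id] = row_fingerprint(rows[row_id])
    return rows
```

With this scheme, calling incremental_cleanse twice over unchanged rows performs the expensive cleansing work only on the first call.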

Microsoft:
Data cleansing is the process of analyzing the quality of data in a data source, manually approving/rejecting the suggestions by the system, and thereby making changes to the data. Data cleansing in Data Quality Services (DQS) includes a computer-assisted process that analyzes how data conforms to the knowledge in a knowledge base, and an interactive process that enables the data steward to review and modify the computer-assisted results to ensure that the data cleansing is performed exactly as they want it to be done.

The data steward can also perform data cleansing in the Integration Services packaging process. In this case, the data steward would use the DQS Cleansing component in Integration Services that automatically performs data cleansing using an existing knowledge base.



Tibco:
Inappropriate, incorrect, duplicate, and missing data are prime examples of dirty data.

Dirty data contributes to inaccurate and unreliable results. If dirty data is used as the primary source for decision making, unforeseen critical errors can occur, predictive models become undependable, and calculations are less precise.

Once dirty data is detected, it has to be corrected. But while that’s taking place, managerial decisions are delayed, processes require re-evaluation, and the work that contributed to generating the dirty data has to be redone.

All this leads to wasted employee time, incorrect strategic decisions, and a decrease in the organization’s return on investment.


In the International Journal of Engineering Research and Applications (IJERA), author Sweety Patel identifies multiple ways data becomes dirty. Examples include:
  • Data’s been entered erroneously or data entry personnel are poorly trained.
  • System limitations or system configuration rules are applied inaccurately.
  • Scheduled data updates are neglected.
  • Duplicate records are not removed.
  • Validation rules are lacking or are applied inconsistently.
  • Source to target mapping definitions are inaccurate.
Additionally, the IJERA article notes that when populating a data warehouse, the extraction, transformation and loading cycle (ETL) is the most important process to ensure that dirty data becomes clean.

In an interview, Milan Thakkar, a senior business intelligence engineer at Mindspark Interactive Inc., says he agrees with that sentiment. He reasons that all data is inherently prone to errors and suggests that during ETL, data should be:

  1. Subjected to general statistical analysis. Evaluate new data against historical data for outliers. Mean, median, mode, standard deviation, range and other statistical methods can be applied. Confidence intervals should also be part of this analysis (see the sketch after this list).
     
  2. Evaluated against a clustering algorithm. A clustering algorithm will also identify outliers and is usually significantly more complete than the general statistical analysis. Clustering can be used to evaluate an entire data set against itself by considering the Euclidean distance between records.
     
  3. Validated. Data integrity tests should be applied and then the data should be vetted against business rules. Check the data type to ensure that the data is appropriate for the column.
     
  4. Standardized. Data transformation rules should be used to ensure that the data format is consistent and the business logic is dependable and based on user requirements.
     
  5. Tracked. A metadata repository should be established to track the entire process, including the data transformations, the vetting process, and every method that’s used to analyze the data. Calculation formulas, data transformation algorithms, and the reasoning behind the business logic should be readily available.
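As an illustration of the first step, the sketch below compares today’s (made-up) load volume against a rough mean plus-or-minus two-standard-deviation band computed from historical daily totals; the figures and the two-sigma band are assumptions chosen only to show the idea.

```python
import statistics

# Hypothetical historical daily row counts and today's incoming count.
historical_daily_rows = [9800, 10150, 9950, 10020, 9890, 10240, 10070]
todays_rows = 4200

mean = statistics.mean(historical_daily_rows)
stdev = statistics.stdev(historical_daily_rows)
lower, upper = mean - 2 * stdev, mean + 2 * stdev   # rough confidence band

if not lower <= todays_rows <= upper:
    print(f"Load volume {todays_rows} is outside the expected band "
          f"({lower:.0f} to {upper:.0f}); flag the batch for review before loading.")
```
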
As with any computer process, an ETL process has to be “told what to do” or programmed correctly. To further protect your organization against dirty data, Drew Rockwell recommends:
  • Dedicating resources to maintaining data integrity.
  • Embedding your analytics.
  • Not forcing an overarching schema.
  • Providing visibility into the origin and history of the data.
  • Thinking beyond Excel.
In general, to be truly protected against dirty data you must first be proactive by building automated processes that cleanse data during ETL, and then apply the steps suggested by Rockwell.
