The data warehouse ETL toolkit : practical techniques for extracting, cleaning, conforming, and delivering data / Ralph Kimball and Joe Caserta.
Material type: Text
Language: English
Publisher: Indianapolis, IN : Wiley
Copyright date: ©2004
Edition: 1st edition
Description: xxxiv, 491 p. : illustrations ; 24 x 19 cm
Content type: text
Media type: unmediated
Carrier type: volume
ISBN: 9780764567575
DDC classification: 005.74 22
LC classification: QA 76 .9 .D37 K53 2004
| Item type | Current library | Home library | Collection | Call number | Copy number | Status | Notes | Due date | Barcode | Item holds |
|---|---|---|---|---|---|---|---|---|---|---|
| Reference (in-library use only) | Biblioteca Antonio Enriquez Savignac | Biblioteca Antonio Enriquez Savignac | COLECCIÓN RESERVA | QA 76 .9 .D37 K53 2004 | Copy 1 | Not for loan (internal loan) | Ingeniería en Datos e Inteligencia Organizacional | | 042764 | |
Browsing Biblioteca Antonio Enriquez Savignac shelves, Collection: COLECCIÓN RESERVA
- QA 76 .9 .D35 S27 Foundations of multidimensional and metric data structures /
- QA 76 .9 .D37 G62 2009 Data warehouse design : modern principles and methodologies /
- QA 76 .9 .D37 I4575 2015 Data architecture : a primer for the data scientist : big data, data warehouse and data vault /
- QA 76 .9 .D37 K53 2004 The data warehouse ETL toolkit : practical techniques for extracting, cleaning, conforming, and delivering data /
- QA 76.9.D37 K55 2016 The Kimball Group reader : relentlessly practical tools for data warehousing and business intelligence /
- QA 76.9.D37 K75 2013 Data warehousing in the age of big data /
- QA 76 .9 .D37 L33 2011 The data warehouse mentor : practical data warehouse and business intelligence insights /
Includes index.
I. Requirements, Realities, and Architecture --
1. Surrounding the Requirements --
1.1. Requirements --
1.1.1. Business Needs --
1.1.2. Compliance Requirements --
1.1.3. Data Profiling --
1.1.4. Security Requirements --
1.1.5. Data Integration --
1.1.6. Data Latency --
1.1.7. Archiving and Lineage --
1.1.8. End User Delivery Interfaces --
1.1.9. Available Skills --
1.1.10. Legacy Licenses --
1.2. Architecture --
1.2.1. ETL Tool versus Hand Coding (Buy a Tool Suite or Roll Your Own?) --
1.2.2. The Back Room – Preparing the Data --
1.2.3. The Front Room – Data Access --
1.3. The Mission of the Data Warehouse --
1.3.1. What the Data Warehouse Is --
1.3.2. What the Data Warehouse Is Not --
1.3.3. Industry Terms Not Used Consistently --
1.3.4. Resolving Architectural Conflict: The Hybrid Bus Approach --
1.3.5. How the Data Warehouse Is Changing --
1.4. The Mission of the ETL Team --
2. ETL Data Structures --
2.1. To Stage or Not to Stage --
2.2. Designing the Staging Area --
2.3. Data Structures in the ETL System --
2.3.1. Flat Files --
2.3.2. XML Data Sets --
2.3.3. Relational Tables --
2.3.4. Independent DBMS Working Tables --
2.3.5. Third Normal Form Entity/Relation Models --
2.3.6. Nonrelational Data Sources --
2.3.7. Dimensional Data Models: The Handoff from the Back Room to the Front Room --
2.3.8. Fact Tables --
2.3.9. Dimension Tables --
2.3.10. Atomic and Aggregate Fact Tables --
2.3.11. Surrogate Key Mapping Tables --
2.4. Planning and Design Standards --
2.4.1. Impact Analysis --
2.4.2. Metadata Capture --
2.4.3. Naming Conventions --
2.4.4. Auditing Data Transformation Steps --
2.5. Summary --
II. Data Flow --
3. Extracting --
3.1. Part 1: The Logical Data Map --
3.1.1. Designing Logical Before Physical --
3.2. Inside the Logical Data Map --
3.2.1. Components of the Logical Data Map --
3.2.2. Using Tools for the Logical Data Map --
3.3. Building the Logical Data Map --
3.3.1. Data Discovery Phase --
3.3.2. Data Content Analysis --
3.3.3. Collecting Business Rules in the ETL Process --
3.4. Integrating Heterogeneous Data Sources --
3.4.1. Part 2: The Challenge of Extracting from Disparate Platforms --
3.4.2. Connecting to Diverse Sources through ODBC --
3.5. Mainframe Sources --
3.5.1. Working with COBOL Copybooks --
3.5.2. EBCDIC Character Set --
3.5.3. Converting EBCDIC to ASCII --
3.5.4. Transferring Data between Platforms --
3.5.5. Handling Mainframe Numeric Data --
3.5.6. Using PICtures --
3.5.7. Unpacking Packed Decimals --
3.5.8. Working with Redefined Fields --
3.5.9. Multiple OCCURS --
3.5.10. Managing Multiple Mainframe Record Type Files --
3.5.11. Handling Mainframe Variable Record Lengths --
3.6. Flat Files --
3.6.1. Processing Fixed Length Flat Files --
3.6.2. Processing Delimited Flat Files --
3.7. XML Sources --
3.7.1. Character Sets --
3.7.2. XML Meta Data --
3.8. Web Log Sources --
3.8.1. W3C Common and Extended Formats --
3.8.2. Name Value Pairs in Web Logs --
3.9. ERP System Sources --
3.10. Part 3: Extracting Changed Data --
3.10.1. Detecting Changes --
3.10.2. Extraction Tips --
3.10.3. Detecting Deleted or Overwritten Fact Records at the Source --
3.11. Summary --
4. Cleaning and Conforming --
4.1. Defining Data Quality --
4.2. Assumptions --
4.3. Part 1: Design Objectives --
4.3.1. Understand Your Key Constituencies --
4.3.2. Competing Factors --
4.3.3. Balancing Conflicting Priorities --
4.3.4. Formulate a Policy --
4.4. Part 2: Cleaning Deliverables --
4.4.1. Data Profiling Deliverable --
4.4.2. Cleaning Deliverable #1: Error Event Table --
4.4.3. Cleaning Deliverable #2: Audit Dimension --
4.4.4. Audit Dimension Fine Points --
4.5. Part 3: Screens and Their Measurements --
4.5.1. Anomaly Detection Phase --
4.5.2. Types of Enforcement --
4.5.3. Column Property Enforcement --
4.5.4. Structure Enforcement --
4.5.5. Data and Value Rule Enforcement --
4.5.6. Measurements Driving Screen Design --
4.5.7. Overall Process Flow --
4.5.8. The Show Must Go On—Usually --
4.5.9. Screens --
4.5.10. Known Table Row Counts --
4.5.11. Column Nullity --
4.5.12. Column Numeric and Date Ranges --
4.5.13. Column Length Restriction --
4.5.14. Column Explicit Valid Values --
4.5.15. Column Explicit Invalid Values --
4.5.16. Checking Table Row Count Reasonability --
4.5.17. Checking Column Distribution Reasonability --
4.5.18. General Data and Value Rule Reasonability --
4.6. Part 4: Conforming Deliverables --
4.6.1. Conformed Dimensions --
4.6.2. Designing the Conformed Dimensions --
4.6.3. Taking the Pledge --
4.6.4. Permissible Variation of Conformed Dimensions --
4.6.5. Conformed Facts --
4.6.6. The Fact Table Provider --
4.6.7. The Dimension Manager: Publishing Conformed Dimensions to Affected Fact Tables --
4.6.8. Detailed Delivery Steps for Conformed Dimensions --
4.6.9. Implementing the Conforming Modules --
4.6.10. Matching Drives Deduplication --
4.6.11. Surviving: Final Step of Conforming --
4.6.12. Delivering --
4.7. Summary --
5. Delivering Dimension Tables --
5.1. The Basic Structure of a Dimension --
5.2. The Grain of a Dimension --
5.3. The Basic Load Plan for a Dimension --
5.4. Flat Dimensions and Snowflaked Dimensions --
5.5. Date and Time Dimensions --
5.6. Big Dimensions --
5.7. Small Dimensions --
5.8. One Dimension or Two --
5.9. Dimensional Roles --
5.10. Dimensions as Subdimensions of Another Dimension --
5.11. Degenerate Dimensions --
5.12. Slowly Changing Dimensions --
5.13. Type 1 Slowly Changing Dimension (Overwrite) --
5.14. Type 2 Slowly Changing Dimension (Partitioning History) --
5.15. Precise Time Stamping of a Type 2 Slowly Changing Dimension --
5.16. Type 3 Slowly Changing Dimension (Alternate Realities) --
5.17. Hybrid Slowly Changing Dimensions --
5.18. Late-Arriving Dimension Records and Correcting Bad Data --
5.19. Multivalued Dimensions and Bridge Tables --
5.20. Ragged Hierarchies and Bridge Tables --
5.21. Technical Note: Populating Hierarchy Bridge Tables --
5.22. Using Positional Attributes in a Dimension to Represent Text Facts --
5.23. Summary --
6. Delivering Fact Tables --
6.1. The Basic Structure of a Fact Table --
6.2. Guaranteeing Referential Integrity --
6.3. Surrogate Key Pipeline --
6.3.1. Using the Dimension Instead of a Lookup Table --
6.4. Fundamental Grains --
6.5. Transaction Grain Fact Tables --
6.5.1. Periodic Snapshot Fact Tables --
6.5.2. Accumulating Snapshot Fact Tables --
6.6. Preparing for Loading Fact Tables --
6.6.1. Managing Indexes --
6.6.2. Managing Partitions --
6.6.3. Outwitting the Rollback Log --
6.6.4. Loading the Data --
6.6.5. Incremental Loading --
6.6.6. Inserting Facts --
6.6.7. Updating and Correcting Facts --
6.6.8. Negating Facts --
6.6.9. Updating Facts --
6.6.10. Deleting Facts --
6.6.11. Physically Deleting Facts --
6.6.12. Logically Deleting Facts --
6.7. Factless Fact Tables --
6.8. Augmenting a Type 1 Fact Table with Type 2 History --
6.9. Graceful Modifications --
6.10. Multiple Units of Measure in a Fact Table --
6.11. Collecting Revenue in Multiple Currencies --
6.12. Late Arriving Facts --
6.13. Aggregations --
6.13.1. Design Requirement #1 --
6.13.2. Design Requirement #2 --
6.13.3. Design Requirement #3 --
6.13.4. Design Requirement #4 --
6.13.5. Administering Aggregations, Including Materialized Views --
6.14. Delivering Dimensional Data to OLAP Cubes --
6.14.1. Cube Data Sources --
6.14.2. Processing Dimensions --
6.14.3. Changes in Dimension Data --
6.14.4. Processing Facts --
6.14.5. Integrating OLAP Processing into the ETL System --
6.14.6. OLAP Wrap-up --
6.15. Summary --
III. Implementation and Operations --
7. Development --
7.1. Current Marketplace ETL Tool Suite Offerings --
7.2. Current Scripting Languages --
7.3. Time Is of the Essence --
7.3.1. Push Me or Pull Me --
7.3.2. Ensuring Transfers with Sentinels --
7.3.3. Sorting Data during Preload --
7.3.4. Sorting on Mainframe Systems --
7.3.5. Sorting on Unix and Windows Systems --
7.3.6. Trimming the Fat (Filtering) --
7.3.7. Extracting a Subset of the Source File Records on Mainframe Systems --
7.3.8. Extracting a Subset of the Source File Fields --
7.3.9. Extracting a Subset of the Source File Records on Unix and Windows Systems --
7.3.10. Extracting a Subset of the Source File Fields --
7.3.11. Creating Aggregated Extracts on Mainframe Systems --
7.3.12. Creating Aggregated Extracts on UNIX and Windows Systems --
7.4. Using Database Bulk Loader Utilities to Speed Inserts --
7.4.1. Preparing for Bulk Load --
7.5. Managing Database Features to Improve Performance --
7.5.1. The Order of Things --
7.5.2. The Effect of Aggregates and Group Bys on Performance --
7.5.3. Performance Impact of Using Scalar Functions --
7.5.4. Avoiding Triggers --
7.5.5. Overcoming the ODBC Bottleneck --
7.5.6. Benefiting from Parallel Processing --
7.6. Troubleshooting Performance Problems --
7.7. Increasing ETL Throughput --
7.7.1. Reducing Input/Output Contention --
7.7.2. Eliminating Database Reads/Writes --
7.7.3. Filtering as Soon as Possible --
7.7.4. Partitioning and Parallelizing --
7.7.5. Updating Aggregates Incrementally --
7.7.6. Taking Only What You Need --
7.7.7. Bulk Loading/Eliminating Logging --
7.7.8. Dropping Database Constraints and Indexes --
7.7.9. Eliminating Network Traffic --
7.7.10. Letting the ETL Engine Do the Work --
7.8. Summary --
8. Operations --
8.1. Scheduling and Support --
8.1.1. Reliability, Availability, Manageability Analysis for ETL --
8.1.2. ETL Scheduling 101 --
8.1.3. Scheduling Tools --
8.1.4. Load Dependencies --
8.1.5. Metadata --
8.2. Migrating to Production --
8.2.1. Operational Support for the Data Warehouse --
8.2.2. Bundling Version Releases --
8.2.3. Supporting the ETL System in Production --
8.3. Achieving Optimal ETL Performance --
8.3.1. Estimating Load Time --
8.3.2. Vulnerabilities of Long-Running ETL Processes --
8.3.3. Minimizing the Risk of Load Failures --
8.4. Purging Historic Data --
8.5. Monitoring the ETL System --
8.5.1. Measuring ETL Specific Performance Indicators --
8.5.2. Measuring Infrastructure Performance Indicators --
8.5.3. Measuring Data Warehouse Usage to Help Manage ETL Processes --
8.6. Tuning ETL Processes --
8.6.1. Explaining Database Overhead --
8.7. ETL System Security --
8.7.1. Securing the Development Environment --
8.7.2. Securing the Production Environment --
8.8. Short-Term Archiving and Recovery --
8.9. Long-Term Archiving and Recovery --
8.9.1. Media, Formats, Software, and Hardware --
8.9.2. Obsolete Formats and Archaic Formats --
8.9.3. Hard Copy, Standards, and Museums --
8.9.4. Refreshing, Migrating, Emulating, and Encapsulating --
8.10. Summary --
9. Metadata --
9.1. Defining Metadata --
9.1.1. Metadata—What Is It? --
9.1.2. Source System Metadata --
9.1.3. Data-Staging Metadata --
9.1.4. DBMS Metadata --
9.1.5. Front Room Metadata --
9.2. Business Metadata --
9.2.1. Business Definitions --
9.2.2. Source System Information --
9.2.3. Data Warehouse Data Dictionary --
9.2.4. Logical Data Maps --
9.3. Technical Metadata --
9.3.1. System Inventory --
9.3.2. Data Models --
9.3.3. Data Definitions --
9.3.4. Business Rules --
9.4. ETL-Generated Metadata --
9.4.1. ETL Job Metadata --
9.4.2. Transformation Metadata --
9.4.3. Batch Metadata --
9.4.4. Data Quality Error Event Metadata --
9.4.5. Process Execution Metadata --
9.5. Metadata Standards and Practices --
9.5.1. Establishing Rudimentary Standards --
9.5.2. Naming Conventions --
9.6. Impact Analysis --
9.7. Summary --
10. Responsibilities --
10.1. Planning and Leadership --
10.1.1. Having Dedicated Leadership --
10.1.2. Planning Large, Building Small --
10.1.3. Hiring Qualified Developers --
10.1.4. Building Teams with Database Expertise --
10.1.5. Don't Try to Save the World --
10.1.6. Enforcing Standardization --
10.1.7. Monitoring, Auditing, and Publishing Statistics --
10.1.8. Maintaining Documentation --
10.1.9. Providing and Utilizing Metadata --
10.1.10. Keeping It Simple --
10.1.11. Optimizing Throughput --
10.2. Managing the Project --
10.2.1. Responsibility of the ETL Team --
10.2.2. Defining the Project --
10.2.3. Planning the Project --
10.2.4. Determining the Tool Set --
10.2.5. Staffing Your Project --
10.2.6. Project Plan Guidelines --
10.2.7. Managing Scope --
10.3. Summary --
IV. Real Time Streaming ETL Systems --
11. Real-Time ETL Systems --
11.1. Why Real-Time ETL? --
11.2. Defining Real-Time ETL --
11.3. Challenges and Opportunities of Real-Time Data Warehousing --
11.4. Real-Time Data Warehousing Review --
11.4.1. Generation 1—The Operational Data Store --
11.4.2. Generation 2—The Real-Time Partition --
11.4.3. Recent CRM Trends --
11.4.4. The Strategic Role of the Dimension Manager --
11.5. Categorizing the Requirement --
11.5.1. Data Freshness and Historical Needs --
11.5.2. Reporting Only or Integration, Too? --
11.5.3. Just the Facts or Dimension Changes, Too? --
11.5.4. Alerts, Continuous Polling, or Nonevents? --
11.5.5. Data Integration or Application Integration? --
11.5.6. Point-to-Point versus Hub-and-Spoke --
11.5.7. Customer Data Cleanup Considerations --
11.6. Real-Time ETL Approaches --
11.6.1. Microbatch ETL --
11.6.2. Enterprise Application Integration --
11.6.3. Capture, Transform, and Flow --
11.6.4. Enterprise Information Integration --
11.6.5. The Real-Time Dimension Manager --
11.6.6. Microbatch Processing --
11.6.7. Choosing an Approach—A Decision Guide --
11.7. Summary --
12. Conclusions --
12.1. Deepening the Definition of ETL --
12.2. The Future of Data Warehousing and ETL in Particular --
12.2.1. Ongoing Evolution of ETL Systems --
*Cowritten by Ralph Kimball, the world's leading data warehousing authority, whose previous books have sold more than 150,000 copies.
*Delivers real-world solutions for the most time- and labor-intensive portion of data warehousing: data staging, or the extract, transform, load (ETL) process.
*Delineates best practices for extracting data from scattered sources, removing redundant and inaccurate data, transforming the remaining data into correctly formatted data structures, and then loading the end product into the data warehouse.
*Offers proven time-saving ETL techniques, comprehensive guidance on building dimensional structures, and crucial advice on ensuring data quality.