Open Issues Needing Help
AI Summary: This task involves building an ELT pipeline to process population and area data. The pipeline will extract data from the specified sources, load it into the data warehouse, and transform it there (including geographical enrichment and density calculation). The final output will be a 'Gold' level table with a unique geographical key and a density indicator, ready for use in the SINGA application.
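A minimal dbt-style sketch of what such a Gold model could look like; the model and column names below (stg_population, stg_area, geo_code, population, area_km2) are assumptions for illustration, not taken from the issue:

```sql
-- Hypothetical Gold model (dbt): model names (stg_population, stg_area) and
-- columns (geo_code, population, area_km2) are illustrative assumptions.
with population as (
    select geo_code, population
    from {{ ref('stg_population') }}
),
area as (
    select geo_code, area_km2
    from {{ ref('stg_area') }}
)
select
    p.geo_code,                                                        -- unique geographical key
    p.population,
    a.area_km2,
    p.population::numeric / nullif(a.area_km2, 0) as density_per_km2  -- density indicator
from population as p
join area as a using (geo_code)
```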
AI Summary: The task involves enriching and transforming data on services and facilities (e.g., post offices, schools) from various sources within an ELT (Extract, Load, Transform) data pipeline. This includes geo-enriching the data, selecting the relevant activity sectors, creating a unique geo-code, and ultimately loading the transformed data into a 'Gold' layer for use in the J'Accueille application. The specific facility types to be processed are predefined.
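As a rough illustration, the sector selection and geo-code construction might resemble the following dbt model; the source model, column names, and sector values are all assumed:

```sql
-- Hypothetical Silver model (dbt): stg_facilities, dept_code, commune_code,
-- activity_sector and the listed sector values are assumptions.
select
    dept_code || commune_code as geo_code,  -- unique geo-code built from administrative codes
    activity_sector,
    facility_name,
    latitude,
    longitude
from {{ ref('stg_facilities') }}
where activity_sector in ('post_office', 'primary_school')  -- keep only the facility types in scope
```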
AI Summary: The task involves building the complete data integration pipeline for household data: transforming raw data from various sources (Bronze layer) into a cleaned and consolidated format (Silver layer), and finally into the target schema (Gold layer) specified in a provided Google Drive document. The pipeline will follow an ELT (Extract, Load, Transform) approach with a medallion architecture and will use Python scripts, Docker Compose, PostgreSQL, and dbt for data transformation and loading.
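One possible shape for a single Bronze-to-Silver staging step in dbt, under assumed source and column names:

```sql
-- Hypothetical Bronze-to-Silver staging model (dbt): the 'bronze' source, the
-- raw_households table, and every column name are assumptions.
with raw as (
    select * from {{ source('bronze', 'raw_households') }}
)
select
    cast(household_id as integer)   as household_id,
    trim(lower(commune_name))       as commune_name,    -- basic string cleaning
    cast(household_size as integer) as household_size,
    cast(loaded_at as timestamp)    as loaded_at
from raw
where household_id is not null                          -- drop rows without a usable key
```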
AI Summary: The task involves creating a data transformation pipeline using dbt (data build tool) to process raw housing data (Bronze layer) into a refined dataset representing social housing (Silver/Gold layers). This requires understanding the existing ELT (Extract, Load, Transform) pipeline, the project's dbt configuration, and the provided documentation and target table specifications. The goal is to reproduce the structure and content of the target table, following the provided methodology as a guide.
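A sketch of how the Silver/Gold selection could be expressed as a dbt model, again with assumed names rather than the project's actual target schema:

```sql
-- Hypothetical model (dbt) narrowing raw housing data to social housing;
-- stg_housing, housing_category and the 'social' value are illustrative assumptions.
select
    geo_code,
    count(*) as social_housing_units   -- stock of social housing per geography
from {{ ref('stg_housing') }}
where housing_category = 'social'      -- keep only social housing records
group by geo_code
```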