[job_card.job_description]
- Designing Hive / HCatalog data models, including table definitions, file formats, and compression techniques for structured and semi-structured data processing
- Implementing Spark-based ETL frameworks
- Implementing big data pipelines for data ingestion, storage, processing, and consumption
- Modifying the Informatica-Teradata and Unix-based data pipeline
- Enhancing the Talend-Hive / Spark and Unix-based data pipelines
- Developing and deploying Scala / Python based Spark jobs for ETL processing
- Strong SQL and data warehousing (DWH) concepts