Raja Sekhar


GCP Data Engineer - Python | SQL | PySpark | Hadoop | Terraform | Dataflow | Airflow | GCP | BigQuery
Raja Sekhar's Location
Hyderabad, Telangana, India
About Raja Sekhar

• Over 12 years of professional IT experience, including 6 years as a GCP Data Engineer.
• Hands-on experience with GCP: BigQuery, GCS buckets, Cloud Functions, Cloud Dataflow, Pub/Sub, Cloud Shell, gsutil, the bq command-line utility, and Dataproc.
• Expertise in Google Cloud, BigQuery, and Terraform.
• Experience with on-premises-to-cloud data migration projects.
• Experience building and architecting multiple data pipelines and end-to-end ETL and ELT processes for data ingestion and transformation in GCP.
• Good knowledge of the automotive finance, healthcare, and loan-processing domains.
• Strong programming experience with Python.
• Expertise in Databricks for data processing.
• Expertise in Airflow, Cloud Data Fusion, Dataflow, Cloud SQL, and Looker.
• Wrote Python DAGs in Airflow that orchestrate end-to-end data pipelines for multiple applications.
• Expertise in BigQuery processing and cost optimization.
• Expertise in AI applications and ML model deployment in cloud environments.
• Experience creating IAM policies to restrict data access.
• Good experience in BigQuery object creation, backup, and recovery.
• Experience with Kafka and Pub/Sub for handling real-time streaming data feeds.
• Expertise in writing Terraform scripts for automation and performing Terraform deployments.
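The Airflow DAG orchestration mentioned above can be sketched, independent of any one scheduler, as a dependency graph executed in topological order. The task names below are hypothetical placeholders, not the actual pipelines:

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline tasks mapped to their upstream dependencies,
# mirroring an extract -> transform -> load -> validate DAG.
pipeline = {
    "extract_gcs": set(),
    "transform_dataproc": {"extract_gcs"},
    "load_bigquery": {"transform_dataproc"},
    "data_quality_check": {"load_bigquery"},
}

def run_order(dag):
    """Return tasks in a valid execution order (upstream tasks first)."""
    return list(TopologicalSorter(dag).static_order())
```

In Airflow itself the same shape is declared with operators and `>>` dependencies; the scheduler then resolves the execution order exactly as the topological sort above does.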

Raja Sekhar's Current Company Details

GCP Data Engineer - Python | SQL | PySpark | Hadoop | Terraform | Dataflow | Airflow | GCP | BigQuery
Raja Sekhar Work Experience Details
  • Blue Cross Blue Shield
    GCP Data Engineer
    Blue Cross Blue Shield Sep 2023 - Jan 2024
    Cary, North Carolina, United States
    Roles & Responsibilities:
    • Expertise in working with GCP and BigQuery SQL.
    • Good experience with Python scripting and PySpark.
    • Built and architected multiple data pipelines and end-to-end ETL and ELT processes for data ingestion and transformation in GCP, and coordinated tasks among the team.
    • Hands-on experience with GCP: BigQuery, Cloud SQL, Cloud Functions, Cloud Dataflow, Pub/Sub, Cloud Shell, gsutil, the bq command-line utility, and Dataproc.
    • Migrated Teradata batch and real-time workloads and datasets to GCP BigQuery.
    • Designed and implemented database schemas and tables using Cloud SQL.
    • Migrated on-premises and other cloud-based databases to Google Cloud SQL.
    • Worked on Databricks for data transformation and processing.
    • Designed, developed, tested, implemented, and integrated Identity and Access Management (IAM) systems.
    • Created custom roles that provide granular IAM access according to a user-specified list of permissions.
    • Set up and maintained logging and monitoring subsystems using tools like Prometheus and Grafana.
    • Created ELT jobs to handle streaming data using Pub/Sub and Dataflow.
    • Implemented CI/CD pipelines using Jenkins, Tekton, and Cloud Build.
    • Experience in dimensional modeling (star schema, snowflake schema), transactional modeling, and slowly changing dimensions (SCD).
    • Expertise in working with Airflow to orchestrate end-to-end data pipelines.
    • Worked on Kubernetes to orchestrate the deployment, scaling, and management of Docker containers.
    • Worked with storage, logging, monitoring, and security services.
    • Expertise in writing infrastructure as code (IaC) in Terraform; created reusable Terraform modules for GCP and BigQuery.
    Skills: GCP, BigQuery, Python, Data Fusion, Airflow, Dataflow, Composer, Looker, Terraform, Linux, Kubernetes, Big Data, Hadoop, AWS, Prometheus, Grafana, Spark, Hive.
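The slowly-changing-dimension work mentioned above typically centers on a MERGE statement. A minimal sketch of an SCD Type 2 MERGE builder for BigQuery follows; the table and column names (`dim_member`, `is_current`, and so on) are illustrative assumptions, and the second pass that re-inserts the new version of a changed row is omitted for brevity:

```python
def scd2_merge_sql(target, staging, key, tracked_cols):
    """Build a BigQuery SCD Type 2 MERGE: close out current rows whose
    tracked columns changed, and insert rows for brand-new keys.
    All table/column names here are hypothetical placeholders."""
    change_pred = " OR ".join(f"T.{c} != S.{c}" for c in tracked_cols)
    cols = ", ".join([key] + tracked_cols)
    return f"""MERGE `{target}` T
USING `{staging}` S
ON T.{key} = S.{key} AND T.is_current
WHEN MATCHED AND ({change_pred}) THEN
  UPDATE SET is_current = FALSE, end_date = CURRENT_DATE()
WHEN NOT MATCHED THEN
  INSERT ({cols}, is_current, start_date, end_date)
  VALUES ({cols}, TRUE, CURRENT_DATE(), NULL)"""
```

The generated statement would be submitted through the BigQuery client or the bq CLI; a production version would also insert the fresh row for each changed key, usually in a follow-up INSERT ... SELECT.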
  • Ford Motor Company
    GCP Data Engineer
    Ford Motor Company Jan 2018 - Mar 2023
    Chennai, Tamil Nadu, India
    Roles & Responsibilities:
    • Experience in dimensional modeling (star schema, snowflake schema), transactional modeling, and slowly changing dimensions (SCD).
    • Worked on BigQuery, GCS buckets, Cloud Functions, Cloud Dataflow, Pub/Sub, Cloud Shell, gsutil, and the bq command-line utility.
    • Experience in BigQuery processing and cost optimization.
    • Experience building and maintaining data pipelines in GCP.
    • Analyzed the system for new enhancements/functionalities and performed impact analysis of the application for implementing ETL changes.
    • Experience reading and writing multiple data formats such as JSON, ORC, and Parquet on HDFS using PySpark.
    • Experience creating IAM policies to restrict user access.
    • Built data pipelines in Airflow on GCP for ETL jobs using different Airflow operators.
    • Implemented frameworks for data quality analysis, data governance, data trending, data validation, and data profiling using Big Data technologies, Scala, Spark, and Python.
    • Created BigQuery authorized views for row-level security and for exposing data to other teams.
    • Built a Scala- and Spark-based configurable framework to connect common data sources such as MySQL, Oracle, Postgres, SQL Server, and BigQuery and load the data into BigQuery.
    • Created Scala programs for Spark transformations in Dataproc to implement end-to-end data pipelines for batch processing.
    • Migrated on-premises and other cloud-based databases to Cloud SQL.
    Skills: BigQuery, Python, Data Fusion, Airflow, Dataflow, Looker, PySpark, Dataproc, Terraform, Linux.
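BigQuery cost optimization like that described above usually starts from the number of bytes a query would scan, which a dry run reports before any cost is incurred. A minimal sketch, assuming the standard on-demand rate of $6.25 per TiB (verify against current Google Cloud pricing):

```python
# Assumed on-demand analysis rate in USD per TiB scanned; this is an
# illustrative figure, not authoritative pricing.
ON_DEMAND_PRICE_PER_TIB = 6.25

def estimate_query_cost(bytes_processed: int) -> float:
    """Estimate on-demand query cost in USD from a dry-run byte count."""
    return round(bytes_processed / 2**40 * ON_DEMAND_PRICE_PER_TIB, 6)
```

In practice the byte count comes from submitting the query with a dry-run job configuration and reading the total-bytes-processed statistic; partitioning and clustering reduce that number, which is where the savings come from.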
  • AT&T
    ETL Developer
    AT&T Apr 2013 - Mar 2015
    Hyderabad, Telangana, India
    Roles & Responsibilities:
    • Gathered requirements from business stakeholders.
    • Updated requirement documents for different business units.
    • Communicated with business users to address issues related to new requirements, data, and business-logic updates.
    • Collaborated with data architects to resolve issues concerning requirements, data, and business logic.
    • Validated test data after the ETL team developed new requirements.
    • Implemented performance-tuning measures for reports and ETL processes.
    • Utilized indexing and partitioning techniques for improved performance.
    • Generated and applied MicroStrategy schema and application objects, including facts, attributes, reports, dashboards, filters, metrics, and templates, using MicroStrategy Desktop.
    • Created and tested UNIX shell scripts for executing Teradata scripts.
    • Employed various Teradata indexing techniques to enhance query performance.
    • Developed unit test plans to validate code before QA handover.
    • Assisted users in extracting mainframe flat files onto a UNIX server and converting them into Teradata tables using Base SAS programs.
    Skills: Teradata SQL, Teradata Viewpoint, PMON, SQL Assistant, Teradata Administrator, Tivoli Scheduler, UNIX, Teradata utilities (BTEQ, FastLoad, MultiLoad, FastExport).
  • GE Healthcare
    ETL Developer
    GE Healthcare Mar 2011 - Feb 2013
    Hyderabad, Telangana, India
    Roles & Responsibilities:
    • Developed BTEQ scripts to pre-populate work tables prior to the main load process and performed transformations in later stages.
    • Performed tuning at the functional level and map level.
    • Used relational SQL wherever possible to minimize data transfer over the network.
    • Developed UNIX scripts to SFTP, archive, cleanse, and process many flat files.
    • Created and ran pre-existing and debug sessions in the Debugger to monitor and test sessions prior to their normal run in the Workflow Manager.
    • Extensively worked on migrating mappings, worklets, and workflows within a repository from one folder to another, as well as among different repositories.
    • Created mapping parameters and variables and wrote parameter files.
    • Designed and implemented tables, functions, stored procedures, and triggers in SQL Server 2008.
    • Wrote SQL queries, stored procedures, and views.
    • Performed application-level activities such as creating tables and indexes; monitored and tuned Teradata BTEQ scripts.
    • Wrote several Teradata BTEQ scripts for reporting purposes.
    • Developed BTEQ scripts to load data from the Teradata staging area to the Teradata data mart.
    • Used Teradata utilities such as FastLoad, BTEQ, FastExport, and MultiLoad to populate data into the EDW.
    • Analyzed and translated functional specifications and change requests into technical specifications.
    Skills: UNIX, Teradata SQL, Teradata Viewpoint, Teradata Administrator, NetVault, BTEQ, FastLoad, MultiLoad, SQL Assistant, Tivoli Scheduler, Informatica.
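The cleanse step of a flat-file job like those described can be sketched in Python; the pipe delimiter and the rule of dropping blank or mis-shaped rows are assumptions for illustration, not the original script's logic:

```python
import csv
import io

def cleanse(raw: str) -> list[list[str]]:
    """Parse a pipe-delimited extract: trim field whitespace, drop blank
    lines, and drop rows whose field count doesn't match the header.
    The file layout here is a hypothetical example."""
    rows = [r for r in csv.reader(io.StringIO(raw), delimiter="|")
            if any(f.strip() for f in r)]
    header, *body = rows
    width = len(header)
    return [header] + [[f.strip() for f in r] for r in body if len(r) == width]
```

A shell-based version of the same step would combine tools like `awk -F'|'` and `sed` before handing the file to FastLoad or MultiLoad.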
