Niranjan R

Azure Data Engineer @ ESI
Niranjan R's Location
United States
About Niranjan R

Niranjan R is an Azure Data Engineer at ESI.

Niranjan R's Current Company Details
ESI

Azure Data Engineer
Niranjan R Work Experience Details
  • ESI
    Azure Data Engineer
    ESI Jun 2022 - Present
    California, United States
    • Collaborated with Business Analysts and Solution Architects to gather client requirements and translated them into Azure-based design architectures.
    • Created High-Level Technical Design and Application Design documents, ensuring clear and comprehensive documentation for stakeholders.
    • Developed and managed serverless architecture and Infrastructure as Code (IaC) using AWS CDK for automated deployment and management of resources.
    • Streamlined data pipeline development with AWS CodePipeline and CodeBuild, enabling continuous integration and delivery (CI/CD) for data workflows.
    • Engineered complex data transformations and manipulations using ADF and PySpark with Databricks.
    • Implemented data quality assurance techniques with Great Expectations, integrating them with AWS services for robust data validation.
    • Orchestrated deployment workflows for data pipelines, ETL jobs, and streaming applications to ensure reliable and efficient delivery of changes to production environments.
    • Designed and implemented interfaces using Azure Data Share for seamless file transfers.
    • Orchestrated complex data pipelines with Apache Airflow, integrated with AWS services to ensure reliable data processing and workflow automation.
    • Ensured data quality and consistency using Great Expectations, incorporating automated validation and testing within AWS data pipelines.
    • Utilized AWS Lake Formation to build a secure and scalable Data Lakehouse, facilitating efficient data management and analytics.
    • Implemented a Power BI integration module for canned reports from ADLS2.
    • Developed and optimized SQL views and stored procedures in Azure SQL DW for enhanced reporting capabilities.
    • Implemented and enforced data governance policies and procedures using Azure Policy to ensure compliance with regulatory requirements.
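The Great Expectations-style checks described above can be illustrated without the library itself; the sketch below reproduces the expectation pattern in plain Python, with column names, rules, and sample claims that are illustrative assumptions rather than details of the actual pipelines.

```python
# Plain-Python sketch of expectation-style data-quality checks
# (the real pipelines used the Great Expectations library).
# Column names and sample rows are hypothetical.

def expect_not_null(rows, column):
    """Every row must have a non-None value in `column`."""
    failures = [i for i, r in enumerate(rows) if r.get(column) is None]
    return {"success": not failures, "failed_rows": failures}

def expect_between(rows, column, low, high):
    """Values in `column` must fall within [low, high]."""
    failures = [i for i, r in enumerate(rows)
                if not (low <= r.get(column, low) <= high)]
    return {"success": not failures, "failed_rows": failures}

def validate(rows, suite):
    """Run every (check, kwargs) pair and collect the results."""
    return [check(rows, **kwargs) for check, kwargs in suite]

claims = [
    {"claim_id": "C1", "amount": 120.0},
    {"claim_id": "C2", "amount": -5.0},   # out of range
    {"claim_id": None, "amount": 40.0},   # missing id
]
suite = [
    (expect_not_null, {"column": "claim_id"}),
    (expect_between, {"column": "amount", "low": 0.0, "high": 1_000_000.0}),
]
results = validate(claims, suite)
```

In a real pipeline each failed expectation would block promotion of the batch rather than just report row indices.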
  • CareFirst
    Data Engineer
    CareFirst Apr 2021 - May 2022
    Nashville, Tennessee, United States
    • Executed migration of legacy claims data from on-premises systems to Azure Data Lake using Azure Data Factory, delivering a seamless transition, a 45% reduction in data access latency, and a 40% reduction in migration downtime.
    • Devised and constructed robust data collection frameworks with Apache NiFi, enabling the ingestion, transformation, and connection of structured and unstructured data in a dynamic ETL pipeline that accommodates diverse data sources and formats.
    • Elevated data accuracy through meticulous data validation and transformation routines in Databricks, yielding a 20% reduction in data processing errors and improving overall data integrity.
    • Performed comprehensive analysis of patient demographics using Databricks and Azure Machine Learning, revealing a 30% higher likelihood of high-cost claims among patients aged 50-65 and enabling targeted interventions for healthcare management.
    • Leveraged Apache Hive to model patient behavior, producing a 25% decrease in false positives when identifying high-risk patients and improving the precision of health assessments.
    • Directed data transformation workflows with Apache Airflow, ensuring punctual and precise updates that raised data quality and strengthened data-driven decision-making.
    • Implemented ELT processes through Snowflake, optimizing data transformation and loading procedures and expanding analytical capacity.
    • Designed and implemented streamlined data warehousing solutions using Azure Synapse Analytics, improving storage optimization, query efficiency, and reporting.
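The age-band finding above (patients aged 50-65 submitting high-cost claims more often) can be sketched in plain Python; the real analysis ran on Databricks and Azure Machine Learning, and the sample claims and `HIGH_COST` threshold below are made-up assumptions:

```python
from collections import defaultdict

# Hypothetical cutoff for a "high-cost" claim, in dollars.
HIGH_COST = 10_000

def high_cost_rate_by_band(claims, bands):
    """Fraction of claims above HIGH_COST within each (lo, hi) age band."""
    totals = defaultdict(int)
    high = defaultdict(int)
    for c in claims:
        for lo, hi in bands:
            if lo <= c["age"] <= hi:
                totals[(lo, hi)] += 1
                if c["amount"] > HIGH_COST:
                    high[(lo, hi)] += 1
    return {band: high[band] / totals[band] for band in totals}

# Invented sample data, for illustration only.
claims = [
    {"age": 34, "amount": 2_000},
    {"age": 52, "amount": 15_000},
    {"age": 61, "amount": 12_000},
    {"age": 58, "amount": 3_000},
    {"age": 41, "amount": 11_000},
    {"age": 29, "amount": 500},
]
rates = high_cost_rate_by_band(claims, bands=[(18, 49), (50, 65)])
```

Comparing the per-band rates is what surfaces the kind of gap reported above; at scale the same group-by runs as a Spark aggregation rather than a Python loop.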
  • Deutsche Bank
    Senior Data Engineer
    Deutsche Bank Nov 2018 - Mar 2021
    New York, United States
    • Ensured the customer eligibility system complied with Securities and Exchange Commission (SEC) regulations for data collection, storage, and usage within the Azure environment.
    • Spearheaded data privacy and security measures, maintaining full compliance with SEC guidelines for handling securities-related data.
    • Managed distributed data processing with AWS Batch, allowing efficient execution of batch computing workloads.
    • Established a culture of data stewardship, ensuring data usage remained aligned with SEC regulations.
    • Led the development and implementation of a customer eligibility project within the Azure platform.
    • Loaded transformed data into target systems, including databases, data warehouses, and cloud storage, employing Python libraries and custom scripts.
    • Applied data analytics techniques to customers' financial and personal data, streamlining the process of determining eligibility for auto financing.
    • Designed and implemented an efficient Extract, Transform, Load (ETL) architecture using Azure services for seamless data transfer from source servers to the Data Warehouse.
    • Developed Python scripts to automate failover procedures and disaster recovery drills within Azure Site Recovery, ensuring seamless recovery of data and applications in case of system failures or disasters.
    • Implemented automated data cleansing and integration processes, resulting in improved data quality.
    • Participated in designing and developing CI/CD pipelines for data engineering within the Azure ecosystem, implementing automation from code commit to deployment using Azure DevOps.
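The extract-transform-load flow described above can be sketched with the stdlib `sqlite3` module standing in for the real targets (Azure SQL DW, cloud storage); the table, columns, and sample records below are invented for illustration:

```python
import sqlite3

def extract():
    """Stand-in for pulling raw records from a source server or API."""
    return [("ACME", "1200.50"), ("GLOBEX", "980.00")]

def transform(rows):
    """Normalize string amounts to floats and derive a review flag.
    The >1000 rule is an illustrative assumption, not real policy."""
    return [(name, float(amount), float(amount) > 1000) for name, amount in rows]

def load(rows, conn):
    """Write transformed rows to the target table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS eligibility "
        "(customer TEXT, exposure REAL, review_required INTEGER)"
    )
    conn.executemany("INSERT INTO eligibility VALUES (?, ?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract()), conn)
loaded = conn.execute(
    "SELECT customer, exposure, review_required FROM eligibility ORDER BY customer"
).fetchall()
```

Keeping extract, transform, and load as separate functions is what lets each stage be retried or swapped independently, which is also how the equivalent ADF pipeline activities are structured.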
  • Repco
    Data Engineer
    Repco Feb 2017 - Jul 2018
    Hyderabad, Telangana, India
    • Examined claims and supporting documentation to ensure policy compliance before processing.
    • Developed a strong understanding of claim processing from both client and service provider perspectives, identifying key metrics for each.
    • Managed policy servicing and maintenance operations, including coverage changes, beneficiary data updates, and premium payments.
    • Processed claims data efficiently through the system.
    • Gained a good understanding of Electronic Health Record (EHR) systems, including their functionalities, data models, data elements, and data privacy and security regulations.
    • Designed high-performance batch ETL pipelines using Azure cloud services.
    • Extracted data from relational databases and APIs with Azure Data Factory and stored it in Azure Data Lake Storage.
    • Utilized PySpark scripts in Azure Databricks for data transformations and conversions.
    • Designed data warehousing solutions using Azure Synapse Analytics for storing and analyzing transformed data.
    • Designed and implemented Python microservices in the healthcare domain.
    • Monitored productivity and resources using Azure Log Analytics.
    • Implemented CI/CD pipelines with Azure DevOps for automated build, test, and deployment processes.
    • Utilized Azure Event Hubs to capture real-time data streams and route them to the appropriate data stores.
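The pre-processing compliance checks mentioned above can be sketched as a small rules pass; the rule names and claim fields below are hypothetical, not Repco's actual policy logic:

```python
# Each rule is a (name, predicate) pair; a claim passes a rule when the
# predicate returns True. Rules and field names are illustrative assumptions.
RULES = [
    ("has_policy_number", lambda c: bool(c.get("policy_number"))),
    ("within_coverage", lambda c: c.get("amount", 0) <= c.get("coverage_limit", 0)),
    ("documents_attached", lambda c: len(c.get("documents", [])) > 0),
]

def compliance_failures(claim):
    """Return the names of the rules the claim violates; empty means pass."""
    return [name for name, rule in RULES if not rule(claim)]

ok_claim = {"policy_number": "P-100", "amount": 500,
            "coverage_limit": 1000, "documents": ["scan.pdf"]}
bad_claim = {"policy_number": "", "amount": 5000,
             "coverage_limit": 1000, "documents": []}
```

Encoding the rules as data rather than branching code makes it easy to add or retire a compliance rule without touching the evaluation loop.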
  • Colruyt Group
    Big Data Developer
    Colruyt Group May 2014 - Jan 2017
    Hyderabad, Telangana, India
    • Implemented custom Java programs for data import/export logic, dynamic column mapping, and advanced data transformations, increasing the efficiency and flexibility of data transfers between RDBMS and Hive tables.
    • Created Java-based User-Defined Functions (UDFs) to extend the functionality of Hive queries, enabling custom data processing and analysis logic for improved performance and flexibility.
    • Developed Java-based UDFs for Pig, facilitating custom data processing and analysis within Pig scripts and enhancing large-scale data analysis capabilities, as well as integration with HBase.
    • Converted existing Java-based MapReduce jobs into Spark RDD transformations using Spark's Java API, improving the maintenance, scalability, and performance of data processing workflows on the Hadoop cluster.
    • Enhanced Apache Flume functionality with custom Java plugins to support additional data sources or sinks, ensuring seamless and reliable data ingestion from diverse sources into the Hadoop ecosystem.
    • Built Spark applications using Java for seamless integration with the existing Java-based codebase and libraries, enabling efficient data processing and analytics tasks on the Hadoop cluster using Spark's distributed computing capabilities.
    • Configured Java-based Oozie workflows to automate and schedule complex data processing workflows, integrating with Hadoop ecosystem components like MapReduce, Hive, and Spark for streamlined execution and coordination.
    • Created Java applications to interact with Apache HBase for real-time NoSQL database storage and retrieval of large-scale structured data, leveraging HBase's Java API for low-latency data access stored in Hadoop HDFS.
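The MapReduce-to-Spark conversions above can be illustrated, language aside, with the classic map/shuffle/reduce word count written in plain Python (the original work used Spark's Java API on a Hadoop cluster):

```python
from collections import Counter
from functools import reduce

# Input records, standing in for HDFS file lines.
lines = ["spark replaces mapreduce", "spark runs on hadoop"]

# Map phase: emit a (word, 1) pair for every word in every line.
mapped = [(word, 1) for line in lines for word in line.split()]

# Shuffle + reduce phase: sum the counts per key.
def reducer(acc, pair):
    word, n = pair
    acc[word] += n
    return acc

counts = reduce(reducer, mapped, Counter())
```

In Spark the same shape becomes `flatMap` → `mapToPair` → `reduceByKey`, which is exactly why porting a MapReduce job to RDD transformations is mostly mechanical.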

Niranjan R Education Details
  • Gitam Deemed University

Frequently Asked Questions about Niranjan R

What company does Niranjan R work for?

Niranjan R works for ESI.

What is Niranjan R's role at the current company?

Niranjan R's current role is Azure Data Engineer.

What schools did Niranjan R attend?

Niranjan R attended Gitam Deemed University.

