Ranjeeth Kumar
Summary
• Results-driven Data Engineer with over 9 years of IT experience, specializing in the end-to-end software application lifecycle within the Hadoop ecosystem. Designs, develops, and implements robust solutions for large structured and unstructured datasets, with a focus on ETL processes and RDBMS.
• Proven expertise in Spark application development using the RDD, Spark SQL, and DataFrame APIs, with a deep understanding of distributed, high-performance systems. Skilled in optimizing MapReduce algorithms and leveraging Spark's in-memory capabilities for extensive datasets; handles large data volumes through effective partitioning, Spark broadcasts, and transformations applied during ingestion.
• Writes MapReduce jobs in Python to process structured, semi-structured, and unstructured data and store it in HDFS. Extensive experience with Business Intelligence solutions, including data warehousing and ETL scripting with tools such as Informatica, Pentaho, and SyncSort. Performs structural modifications with MapReduce and Hive, and data analysis with visualization/reporting tools such as Tableau.
• Seasoned in data modeling, including dimensional and relational concepts such as star schemas and fact/dimension tables. Proficient in Pig scripts and Hive Query Language; manages Hive data warehouse infrastructure, including table creation, data distribution, and query optimization.
• Extensive hands-on experience with AWS services, including EC2, S3, RDS, VPC, IAM, Elastic Load Balancing, Auto Scaling, CloudFront, CloudWatch, SNS, SES, and SQS. Well-versed in file formats such as Text, CSV, Parquet, and JSON.
• Solid database design and modeling background, with migration and development expertise across MySQL, MS SQL Server, DB2, and Oracle. Proven ability to design and implement stored procedures, triggers, cursors, constraints, and functions. A reliable professional committed to delivering meaningful business insights through comprehensive, innovative solutions.
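The partitioning and Spark-broadcast techniques named above can be illustrated with a plain-Python sketch of a map-side (broadcast) join: a small dimension table is replicated to every partition of a large fact table so the join needs no shuffle. All names and data here are hypothetical, and plain dicts stand in for Spark structures.

```python
def broadcast_join(fact_partitions, dim_table, fact_key, dim_key):
    """Join each partition of a large fact table against a small,
    fully replicated dimension table (a map-side/broadcast join)."""
    # Build the lookup once; in Spark this is what a broadcast variable holds.
    lookup = {row[dim_key]: row for row in dim_table}
    joined = []
    for partition in fact_partitions:
        for fact in partition:
            dim = lookup.get(fact[fact_key])
            if dim is not None:
                joined.append({**fact, **dim})
    return joined

# Hypothetical example: two fact partitions joined to a tiny dimension table.
facts = [
    [{"order_id": 1, "cust_id": "A"}, {"order_id": 2, "cust_id": "B"}],
    [{"order_id": 3, "cust_id": "A"}],
]
dims = [{"cust_id": "A", "region": "US"}, {"cust_id": "B", "region": "EU"}]
result = broadcast_join(facts, dims, "cust_id", "cust_id")
```

Because the lookup table is shipped whole, each partition joins independently, which is why broadcast joins avoid the shuffle cost of a regular join when one side is small.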
Senior Data Engineer, Amazon
Feb 2022 - Present | Seattle, Washington, United States
• Led the end-to-end development of scalable, efficient Big Data solutions using Apache Hive, Apache Spark, Scala, and Apache Kafka.
• Engineered and deployed highly scalable, resilient cloud solutions on AWS, leveraging services such as Amazon EC2, S3, and AWS Lambda.
• Designed and developed NLP applications for medical entity extraction and text mining, achieving a 20% increase in data interoperability and discoverability.
• Engineered a robust database architecture to store NLP entities and their relationships, improving data retrieval efficiency by 30%.
• Spearheaded the development of advanced risk management systems, enhancing system reliability and performance.
• Led a team in designing and implementing a comprehensive data warehouse solution, driving a 30% improvement in data processing efficiency through optimized ETL processes.
• Focused on financial projects, particularly in the Market Risk domain, contributing to a 20% reduction in risk assessment errors.
• Applied expertise in SQL and database programming, particularly with MS SQL Server, developing database solutions that improved data retrieval times by 25%.
• Developed and maintained scalable data platforms using Python, Spark, and PySpark, improving data processing efficiency by more than 30%.
• Led a successful migration to PySpark on AWS, improving data processing speed and reliability by 25%.
• Designed and implemented robust data pipelines on AWS, ensuring data quality and integrity and reducing data errors by 20%.
• Collaborated with cross-functional teams to gather data requirements and deliver data solutions that supported business objectives, contributing to a 15% increase in project delivery efficiency.
• Implemented and managed monitoring, logging, and automation agents within AWS environments, resulting in a 35% reduction in downtime.
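The medical entity extraction work described above can be sketched, in heavily simplified form, as a vocabulary lookup over clinical text. A production system would use a trained NER model; the vocabulary, labels, and sample note below are hypothetical.

```python
import re

# Hypothetical vocabulary; real systems use a trained clinical NER model.
MEDICAL_TERMS = {"hypertension": "CONDITION", "diabetes": "CONDITION",
                 "metformin": "DRUG", "lisinopril": "DRUG"}

def extract_entities(text):
    """Return (term, label, offset) for each vocabulary hit in the text."""
    entities = []
    lowered = text.lower()
    for term, label in MEDICAL_TERMS.items():
        # Word boundaries keep "diabetes" from matching inside longer tokens.
        for match in re.finditer(r"\b" + re.escape(term) + r"\b", lowered):
            entities.append((term, label, match.start()))
    return sorted(entities, key=lambda e: e[2])

note = "Patient with diabetes and hypertension, started on metformin."
found = extract_entities(note)
```

Storing the extracted (term, label, offset) triples, plus relationships between them, is the kind of structure the entity database mentioned above would hold.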
Senior Data Engineer, Optum
Oct 2020 - Feb 2022 | Minneapolis, Minnesota, United States
• Applied a deep understanding of data architecture and data modeling techniques, ensuring scalability and efficiency in data handling.
• Architected scalable, fault-tolerant data solutions across Azure, AWS, and Databricks.
• Ensured high-quality data integration by employing advanced data validation and error handling methodologies.
• Spearheaded the optimization of data processing performance through meticulous analysis and tuning.
• Facilitated cross-functional team collaboration to identify and resolve complex data-related issues.
• Pioneered the adoption of new data technologies and methodologies to improve project outcomes and efficiency.
• Conducted comprehensive data analysis and reporting, delivering actionable insights to stakeholders.
• Developed and enforced data governance and privacy policies to comply with industry standards and regulations.
• Used Databricks collaborative notebooks for streamlined development and communication within the team.
• Implemented security and data governance best practices within cloud and big data environments.
• Troubleshot and optimized big data processes and pipelines for improved data quality and performance.
• Demonstrated excellent verbal and written communication skills.
• Developed and deployed microservices architectures to enhance application scalability.
• Engaged in CI/CD practices, enhancing development workflows and product quality.
• Collaborated actively with cross-functional teams to drive data-driven decision making and support business intelligence.
• Optimized ETL components, enhancing the overall efficiency and reliability of the data warehouse and streamlining data processes.
• Conducted regular audits and implemented enhancements, ensuring the platforms consistently met and exceeded performance expectations and contributing to a more efficient, responsive data infrastructure.
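The data validation and error handling approach mentioned above can be sketched as a row-level validator that splits a batch into accepted and rejected records, with a reason attached to each rejection. The schema, field names, and sample claim records are hypothetical.

```python
def validate_rows(rows, required, casts):
    """Split rows into (valid, rejected); each rejected row carries a reason.
    `required` lists mandatory fields; `casts` maps field -> type converter."""
    valid, rejected = [], []
    for row in rows:
        missing = [f for f in required if row.get(f) in (None, "")]
        if missing:
            rejected.append((row, f"missing fields: {missing}"))
            continue
        try:
            clean = dict(row)
            for field, cast in casts.items():
                clean[field] = cast(row[field])
            valid.append(clean)
        except (TypeError, ValueError) as exc:
            rejected.append((row, f"bad value: {exc}"))
    return valid, rejected

# Hypothetical batch: one good record, one missing ID, one bad amount.
rows = [
    {"claim_id": "1", "amount": "100.5"},
    {"claim_id": "", "amount": "50"},
    {"claim_id": "3", "amount": "abc"},
]
ok, bad = validate_rows(rows, required=["claim_id", "amount"],
                        casts={"amount": float})
```

Routing rejected rows to a quarantine table with their reasons, rather than failing the whole batch, is a common way such pipelines keep throughput high while preserving data quality.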
Senior Data Engineer, Centene Corporation
Oct 2018 - Oct 2020 | Denver Metropolitan Area
• Led training sessions and workshops for team members on best practices in data engineering and analytics.
• Actively participated in strategic planning sessions to align data projects with organizational goals.
• Fostered a culture of continuous improvement by initiating and leading retrospectives and process refinement sessions.
• Applied Agile/Scrum methodologies, resolved technical challenges, and embraced Domain-Driven Design principles.
• Designed, developed, and maintained data pipelines using Apache Beam/Dataflow for efficient processing.
• Leveraged Databricks MLflow to streamline the machine learning lifecycle from experimentation to deployment.
• Spearheaded ETL transformation with dbt, enhancing data extraction and loading with Python and shell scripting.
• Collaborated with data scientists, delivering reliable data through Apache NiFi pipelines.
• Managed Snowflake database structures, optimizing performance and resolving ingestion failures.
• Used Spark with Scala to enhance MapReduce programs, improving processing efficiency.
• Implemented access control and user permissions for GCP services, ensuring data security.
• Supported application teams with GCP services such as Dataflow and BigQuery, automating deployments.
• Maintained knowledge of the latest GCP technologies to optimize data processing.
• Employed Java for MapReduce jobs, processing large-scale datasets.
• Implemented strategies to collect and analyze customer data for personalized experiences.
• Developed scalable data processing pipelines in Python/Spark, improving efficiency by 20%.
• Delivered personalized news experiences through API deployment.
• Analyzed customer feedback data to refine predictive models for news personalization.
• Used AWS Glue to integrate with Redshift and Athena for comprehensive analytics.
• Designed and operationalized large-scale enterprise data solutions in Java.
• Developed and maintained endpoint APIs for user requests and news story responses.
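The MapReduce jobs mentioned above all follow the same map/shuffle/reduce pattern, which can be sketched in a few lines of plain Python (word count as the classic example; Python is used here for brevity even though the jobs above ran in Java, and the input lines are made up).

```python
from collections import defaultdict

def map_phase(lines):
    # Map: emit a (word, 1) pair for every word in every input line.
    for line in lines:
        for word in line.lower().split():
            yield word, 1

def shuffle(pairs):
    # Shuffle: group all values by key, as the framework does between phases.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: combine each key's values; here, sum the counts per word.
    return {word: sum(counts) for word, counts in groups.items()}

lines = ["big data big pipelines", "data pipelines data"]
counts = reduce_phase(shuffle(map_phase(lines)))
```

Because map and reduce operate on independent keys, the framework can run both phases in parallel across a cluster, which is what makes the pattern scale to large datasets.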
Senior Data Engineer, Ascena
Oct 2014 - Oct 2018 | Mahwah, New Jersey, United States
• Designed and implemented data pipelines using Azure services such as Azure Data Factory, Azure Databricks, and Azure HDInsight to import data from sources including RDBMS, cloud storage, and streaming datasets.
• Developed ETL workflows using Azure Data Factory and Azure Databricks to transform and process data in Python, Scala, and SQL.
• Analyzed data migration and integration error/warning reports, determined root causes, and proposed solutions for remediation.
• Created and managed scalable data architectures using Azure Synapse Analytics and Azure Cosmos DB to store structured and semi-structured data and enable high-speed, low-latency querying.
• Designed and implemented data processing pipelines using Apache NiFi to ingest, transform, and deliver large volumes of data from various sources.
• Optimized query performance and data quality by implementing partitioning, clustering, and indexing techniques in Azure Synapse Analytics and Azure Cosmos DB.
• Manually validated migrated data, tracked defects, and revalidated remediated issues.
• Implemented data processing jobs using Apache Spark and Azure Databricks to analyze data and handle streaming data.
• Configured and monitored Apache NiFi clusters for optimal performance and scalability.
• Worked closely with cross-functional teams to analyze data requirements and develop data models, ETL pipelines, and data integration solutions using Teradata on Azure.
• Stored data in Azure Blob Storage and Azure Data Lake Storage using file formats such as CSV, Parquet, and ORC.
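The date partitioning used to speed up lake queries like those above can be sketched as Hive-style `year=/month=/day=` paths: when a query filters on date, the engine reads only the matching directories. The storage account, container, and table names below are hypothetical.

```python
from datetime import date

def partition_path(base, event_date, fmt="parquet"):
    """Build a Hive-style partition path (year=/month=/day=) so engines can
    prune partitions when a query filters on the event date."""
    return (f"{base}/year={event_date.year}"
            f"/month={event_date.month:02d}"
            f"/day={event_date.day:02d}/data.{fmt}")

# Hypothetical ADLS path for one day of an orders table.
path = partition_path("abfss://sales@lake.dfs.core.windows.net/orders",
                      date(2017, 3, 9))
```

Zero-padding the month and day keeps lexicographic directory order equal to chronological order, which matters for range scans over partition listings.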
Python Developer, ValueLabs
May 2014 - Sep 2014 | Hyderabad, Telangana, India
• Built the application using a test-driven methodology, implementing unit tests with Python's unittest framework.
• Migrated the Django database from SQLite to MySQL while preserving complete data integrity.
• Created table, matrix, and chart reports, as well as web reports via URL access customization, using SQL Server Reporting Services (SSRS).
• Worked collaboratively with cross-functional team members to debug and troubleshoot web apps, using Git for version control.
• Created and executed a variety of MySQL database queries using the Python MySQL connector and the MySQL package.
• Created and managed databases in Python and built RESTful web services backed by SQLAlchemy and PostgreSQL.
• Developed a web application using MySQL for the database, Python scripting for data processing, and HTML, CSS, jQuery, and Highcharts for data visualization.
• Generated a dynamic property list for each application using Python modules such as math, glob, random, NumPy, matplotlib, seaborn, and pandas.
• Added navigation, pagination, column filtering, and addition/deletion of columns in views using Python-based GUI components.
• Developed views and templates with Django's view controller and templating language to create a user-friendly website interface.
• Tested APIs with Postman, exercising GET, POST, PUT, and DELETE methods on each URL to examine responses and error handling.
• Developed tools in Python and Bash to improve operations and application systems for retail management, including data conversion scripts and integrations using AMQP/RabbitMQ, REST, JSON, and CRUD scripts.
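The SQLite-to-MySQL migration work above relied on SQL that behaves the same on both engines. A minimal sketch using Python's standard-library `sqlite3` module (table and data are hypothetical; an in-memory database stands in for the app's store):

```python
import sqlite3

# In-memory SQLite stands in for the application database; parameterized
# SQL like this runs unchanged against MySQL after a migration.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT, price REAL)")
conn.executemany("INSERT INTO products (name, price) VALUES (?, ?)",
                 [("widget", 2.5), ("gadget", 7.0)])
conn.commit()

# Placeholders (?) keep queries injection-safe and portable across engines.
cheap = conn.execute("SELECT name FROM products WHERE price < ?", (5,)).fetchall()
total = conn.execute("SELECT COUNT(*) FROM products").fetchone()[0]
```

Sticking to parameterized statements and portable column types is what lets an ORM such as Django's swap the backend with minimal code change.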
Education
• Computer Software And Media Applications
Frequently Asked Questions about Ranjeeth Kumar
What company does Ranjeeth Kumar work for?
Ranjeeth Kumar works for Amazon
What is Ranjeeth Kumar's role at the current company?
Ranjeeth Kumar's current role is Sr. Data Engineer at Amazon.
What schools did Ranjeeth Kumar attend?
Ranjeeth Kumar attended Osmania University.