Lakshmi V Email and Phone Number
With over 8 years of experience in data engineering, I have a proven track record of transforming complex data challenges into impactful solutions. My expertise spans cloud platforms such as AWS and GCP, where I have led initiatives to migrate on-premises data to the cloud, leveraging technologies like S3, EC2, EMR, Redshift, Lambda, Glue, BigQuery, and Dataproc.

I specialize in designing and implementing robust, scalable data processes and pipelines using Python, PySpark, Hive, and Kafka. My in-depth knowledge of data modeling, warehousing, and ETL processes has allowed me to optimize data workflows and improve data quality. I have hands-on experience with Hadoop ecosystems, NoSQL databases, and advanced analytics, using tools like Apache Airflow for job scheduling.

My strong background in machine learning and predictive modeling, coupled with my skills in Python libraries such as Pandas, NumPy, and Scikit-learn, enables me to deliver data-driven insights and drive business value. I am passionate about leveraging cutting-edge technologies to create scalable, efficient, and secure data solutions.
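For illustration, a minimal sketch of the kind of PySpark batch pipeline described above; the bucket paths, column names, and aggregation are hypothetical placeholders, not taken from any actual project:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("example-pipeline").getOrCreate()

    # Read raw JSON events from cloud storage (hypothetical S3 path).
    raw = spark.read.json("s3a://example-bucket/raw/events/")

    # Clean and aggregate: drop malformed rows, then count events per day.
    daily_counts = (
        raw.dropna(subset=["event_id", "event_ts"])
           .withColumn("event_date", F.to_date("event_ts"))
           .groupBy("event_date")
           .count()
    )

    # Write partitioned Parquet for downstream consumers
    # (e.g. warehouse loads into Redshift or BigQuery).
    daily_counts.write.mode("overwrite").partitionBy("event_date").parquet(
        "s3a://example-bucket/curated/daily_counts/"
    )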
Senior Data Engineer, Capco · Apr 2023 - Present · Chicago, Illinois, United States
Leveraged expertise in AWS, Python, and SQL to develop and optimize robust data pipelines, significantly improving data processing accuracy and speed. Designed and executed efficient ETL workflows using DBT and Snowflake, and utilized AWS Glue for advanced data cataloging. Spearheaded real-time data streaming projects with Apache Kafka, driving significant improvements in data processing within the financial services sector. Managed large-scale data storage on AWS S3, ensuring compliance and data security. Enhanced automation with Apache Airflow and optimized data models in Teradata and MongoDB for complex financial analysis. Coordinated cross-functional projects to align data engineering efforts with business objectives.
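As an illustration of the Airflow automation mentioned above, a minimal DAG sketch; the DAG id, schedule, and extract/load callables are hypothetical placeholders:

    from datetime import datetime
    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract():
        ...  # e.g. pull incremental records from a source system

    def load():
        ...  # e.g. write transformed records to the warehouse

    with DAG(
        dag_id="example_daily_etl",
        start_date=datetime(2023, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        extract_task = PythonOperator(task_id="extract", python_callable=extract)
        load_task = PythonOperator(task_id="load", python_callable=load)
        extract_task >> load_task  # extract must finish before load runs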
Data Engineer, Deutsche Bank · Mar 2021 - Apr 2023 · New York, United States
As a Data Engineer at Deutsche Bank, I played a pivotal role in enhancing the bank's data infrastructure and analytics capabilities. I engineered robust data integration solutions using AWS S3 and DynamoDB, which streamlined data storage and retrieval for banking operations. By automating ETL workflows with Apache NiFi and AWS Glue, I significantly improved the efficiency and reliability of data pipelines. I also implemented continuous integration and deployment pipelines with Docker and Git, fostering greater agility and productivity within the team.

Leveraging Tableau for data visualization, I provided critical business insights through interactive dashboards, supporting key decision-making processes. My work included optimizing data workflows with Informatica, ensuring data quality and compliance with stringent banking regulations. Additionally, I led the migration of legacy systems to cloud-based platforms, modernizing the data architecture to support scalable and secure banking operations. Throughout my tenure, I collaborated closely with IT and business teams to align data integration efforts with the bank's strategic goals, continually advancing our data infrastructure to meet evolving demands.
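A minimal boto3 sketch of the S3-plus-DynamoDB integration pattern described above; the bucket, table, and item fields are hypothetical:

    import boto3

    s3 = boto3.client("s3")
    table = boto3.resource("dynamodb").Table("example-ingest-log")

    # Stage a raw file in S3, then record its metadata in DynamoDB
    # so downstream jobs can discover it by key.
    s3.upload_file("local/batch.csv", "example-bucket", "raw/batch.csv")
    table.put_item(Item={"object_key": "raw/batch.csv", "status": "staged"})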
Hadoop Engineer, Farmers Insurance · Jan 2019 - Feb 2021 · Woodland, California, United States
As a Hadoop Engineer at Farmers Insurance, I led large-scale data migrations to Azure SQL Database and Azure Cosmos DB, ensuring seamless data transitions for the insurance sector. I optimized data warehousing by configuring a snowflake schema on Azure HDInsight and developed real-time data solutions using Apache Kafka and Stream Analytics, enhancing decision-making capabilities. My role also involved implementing robust data security measures with Databricks and Azure Data Lake Storage (ADLS), automating infrastructure management with Terraform, and designing data integration pipelines with Apache Hadoop.

I improved data processing efficiency by leveraging Scala on Databricks and managing PostgreSQL databases on Azure for critical applications. My Python-based pipelines enhanced data reliability, and integrating Kafka with existing systems facilitated effective data ingestion. Additionally, I orchestrated automation for data backup and recovery, supported custom data models in Azure Cosmos DB, and utilized SQL and Python for complex queries and scripts. My contributions included training technical teams, aligning data strategies with business objectives, and monitoring performance with Azure Monitor, significantly enhancing the analytics and operational capabilities of the insurance data systems.
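To illustrate the Kafka ingestion work mentioned above, a minimal consumer loop using the kafka-python client; the topic, broker address, and sink are hypothetical:

    import json
    from kafka import KafkaConsumer

    consumer = KafkaConsumer(
        "example-claims-events",
        bootstrap_servers="localhost:9092",
        value_deserializer=lambda v: json.loads(v.decode("utf-8")),
        auto_offset_reset="earliest",
    )

    # Hand each event to a downstream sink (stdout here as a placeholder).
    for message in consumer:
        print(message.value)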
SQL Developer, FuGenX Technologies · Jul 2017 - Oct 2018 · Hyderabad, Telangana, India
As a SQL Developer at FuGenX Technologies in Hyderabad, I developed and optimized complex SQL queries and scripts for MySQL and PostgreSQL, enhancing data retrieval and reporting functionality. I implemented data integration and transformation processes using Talend, which streamlined data flow between systems, and utilized Python to automate data manipulation tasks, boosting process efficiency. I also managed data backups and recovery processes with Git, ensuring data security and consistency.

In addition to creating dynamic reports and dashboards with Power BI, I focused on database tuning and optimization to improve performance and access speeds. I designed and developed robust database schemas, collaborated on resolving SQL performance issues, and conducted data analysis to translate complex data into actionable insights. I supported project management through documentation and version control, trained new team members on SQL best practices, developed automated testing scripts, and contributed to database migration projects and data security protocols.
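A minimal sketch of the kind of Python-driven SQL work described above, using psycopg2 against PostgreSQL; the connection settings, table, and columns are hypothetical:

    import psycopg2

    conn = psycopg2.connect(dbname="example", user="report", host="localhost")
    with conn, conn.cursor() as cur:
        # Parameterized query: safe against injection and reusable by the planner.
        cur.execute(
            "SELECT customer_id, SUM(amount) FROM orders "
            "WHERE order_date >= %s GROUP BY customer_id",
            ("2018-01-01",),
        )
        for customer_id, total in cur.fetchall():
            print(customer_id, total)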
Data Analyst, iLink Digital · Jan 2016 - Jun 2017 · Pune, Maharashtra, India
As a Data Analyst at iLink Digital in Pune, I focused on enhancing data quality through Python and SQL, ensuring accuracy and consistency across various data sources. I developed and maintained data visualization dashboards with QlikView and Tableau, delivering actionable insights that supported strategic decision-making. I managed version control and workflow using Subversion (SVN), which improved team collaboration and project tracking.

I utilized Apache Hadoop and Apache Hive to process large datasets, boosting analytics capabilities and system performance. My role included optimizing SQL queries, integrating and analyzing data with Talend, and creating dynamic reports with Power BI. I performed advanced data cleansing, conducted data audits, and developed ETL processes for new data sources. Additionally, I trained team members on best practices, implemented data validation checks, and collaborated with IT to enhance data governance and security measures.
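As a sketch of the data validation checks mentioned above, a short pandas example; the file name, columns, and rules are hypothetical:

    import pandas as pd

    df = pd.read_csv("example_sales.csv")

    # Basic quality checks: required keys present, no duplicates,
    # and values inside an expected range.
    problems = {
        "missing_ids": int(df["order_id"].isna().sum()),
        "duplicate_ids": int(df["order_id"].duplicated().sum()),
        "negative_amounts": int((df["amount"] < 0).sum()),
    }

    for check, count in problems.items():
        print(f"{check}: {count} rows")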
Frequently Asked Questions about Lakshmi V
What company does Lakshmi V work for?
Lakshmi V works for Capco
What is Lakshmi V's role at the current company?
Lakshmi V's current role is Senior Big Data Engineer at Capco | Actively seeking new opportunities | Data Engineer | Big Data | SQL | AWS | Hadoop | Azure | PySpark | ETL.
Who are Lakshmi V's colleagues?
Lakshmi V's colleagues are Vishaka Naik, Paolo La Torre, Danielle Rodrigues, Priyanka Panda, Allan Cuttle, John Cleary, Jhonny Uray.