Bhavana V.

Bhavana V. Email and Phone Number

Actively Seeking Opportunities on C2C | Open to Re-locate across the USA | Senior Python Data Engineer @ Credit Suisse | REST APIs, data processing @ Credit Suisse
Zurich, Zurich, Switzerland
Bhavana V.'s Location
United States
About Bhavana V.

As a Senior Python Data Engineer at Credit Suisse, I develop and maintain a data platform that supports various business functions and processes. I have extensive experience working with different data sources, formats, and frameworks, such as Hive, MySQL, Sqoop, HDFS, Java, JDBC, RDBMS, shell scripting, spreadsheets, and text files. I also create and test REST APIs in Python with the Flask and Django frameworks and integrate them with other applications and systems.

In my previous roles, I have built data pipelines, implemented Terraform scripts, managed AWS and GCP infrastructure, developed predictive analytics, written Ansible playbooks, and created complex stored procedures and reports. I enjoy working with diverse, cross-functional teams and contributing to the design and delivery of innovative and scalable data solutions. I am motivated by the challenges and opportunities that data engineering offers, and I am always eager to learn new technologies and best practices. I believe I can bring value to any organization looking for a data engineer who can handle complex data problems and provide reliable and efficient data services.

Bhavana V.'s Current Company Details
Credit Suisse

Zurich, Zurich, Switzerland
Employees: 53,360
Bhavana V. Work Experience Details
  • Credit Suisse
    Senior Python Data Engineer
    Credit Suisse Mar 2021 - Present
    New York, New York, United States
    • Developed a data platform from scratch and took part in the requirement gathering and analysis phase of the project, documenting the business requirements.
    • Designed tables in Hive and MySQL, used Sqoop to import and export databases to HDFS, and processed large datasets in structured, semi-structured, and unstructured forms.
    • Developed REST APIs in Python with the Flask and Django frameworks and integrated various data sources, including Java/JDBC applications, RDBMS, shell scripts, spreadsheets, and text files.
    • Worked with the Hadoop architecture and its daemons, including NameNode, DataNode, JobTracker, TaskTracker, and ResourceManager.
    • Prepared Python and Scala scripts to automate ingestion from sources such as APIs, AWS S3, Teradata, and Snowflake.
    • Converted legacy reports from SAS, Looker, Access, Excel, and SSRS into Power BI and Tableau.
    • Created Python scripts to read CSV, JSON, and Parquet files from S3 buckets and load them into AWS S3, DynamoDB, and Snowflake (a sketch of this pattern follows this entry).
    • Used AWS Data Pipeline for data extraction, transformation, and loading from homogeneous and heterogeneous sources, and built charts for business decision-making with the Python Matplotlib library.
    • Developed scripts to load data from HDFS into Hive and ingested data into the data warehouse using various loading techniques.
    • Used AWS Glue for transformations and AWS Lambda to automate the process.
    • Created monitors, alarms, notifications, and logs for Lambda functions and Glue jobs using CloudWatch.
    • Created multiple Glue ETL jobs in Glue Studio, applied transformations, and loaded the results into S3, Redshift, and RDS.
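    The S3-to-DynamoDB work described above follows a common pattern; the sketch below is a minimal illustration, not code from this role. The bucket, key, and table names are placeholders, and it assumes boto3, pandas, and pyarrow are installed and that the DynamoDB table's key attributes are present in the data.

      # Minimal sketch: read a Parquet file from S3 and load its rows into DynamoDB.
      # Bucket, key, and table names are placeholders.
      import io

      import boto3
      import pandas as pd

      s3 = boto3.client("s3")
      dynamodb = boto3.resource("dynamodb")

      obj = s3.get_object(Bucket="example-raw-bucket", Key="exports/accounts.parquet")
      df = pd.read_parquet(io.BytesIO(obj["Body"].read()))

      table = dynamodb.Table("example-accounts")
      with table.batch_writer() as batch:
          for record in df.to_dict(orient="records"):
              # Cast values to str so every attribute is DynamoDB-serializable.
              batch.put_item(Item={k: str(v) for k, v in record.items()})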
  • MetLife
    Mid-Level Python Software Developer
    MetLife Oct 2019 - Feb 2021
    New York, New York, United States
    • Created GET, POST, PUT, and DELETE methods to call the API server and tested the RESTful API with Postman; also exported CloudWatch Logs to S3 and loaded them into Kinesis Streams for data processing.
    • Created Terraform scripts for EC2 instances, Elastic Load Balancers, and S3 buckets; used Terraform to manage the AWS infrastructure and managed servers with configuration management tools such as Chef and Ansible.
    • Implemented integration test cases, developed predictive analytics using the Apache Spark Scala APIs, and used REST clients and SoapUI to test web services for server-side changes.
    • Wrote Ansible playbooks, with Python and SSH as the wrapper, to manage the configuration of AWS nodes; tested the playbooks on AWS instances using Python and ran Ansible scripts to provision dev servers.
    • Wrote Python scripts to parse XML documents and load the data into the database; designed and developed a RESTful API and handled user validation on both the client and server side.
    • Developed Python APIs to dump the array structures in the processor at the point of failure for debugging, and handled web application concerns such as UI security, logging, and backend services.
    • Wrote functional test cases for the REST APIs in Postman and integrated them with the Jenkins build scripts.
    • Designed and developed ETL processes in AWS Glue to migrate data from external sources such as S3 Parquet and text files into Amazon Redshift.
    • Used the AWS Glue Data Catalog with crawlers to catalog data from S3 and ran SQL queries over it with Amazon Athena.
    • Created Lambda functions to run AWS Glue jobs in response to S3 events (see the sketch after this entry).
    • Used AWS services including S3, EC2, Glue, Athena, Redshift, EMR, SNS, SQS, DMS, and Kinesis.
    • Extracted data from source systems such as S3, Redshift, and RDS, and created tables and databases in the Glue Catalog with Glue crawlers.
    • Created a PySpark data frame to bring data from DB2 into Amazon S3.
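    The S3-triggered Glue bullet above describes an event-driven pattern; the sketch below is a minimal Lambda handler illustrating it, not code from this role. The Glue job name is a placeholder, and it assumes the function is subscribed to S3 ObjectCreated notifications.

      # Minimal sketch of a Lambda handler that starts a Glue job when an object lands in S3.
      # The Glue job name is a placeholder.
      import boto3

      glue = boto3.client("glue")

      def handler(event, context):
          for record in event.get("Records", []):
              bucket = record["s3"]["bucket"]["name"]
              key = record["s3"]["object"]["key"]
              # Hand the new object's location to the Glue job as job arguments.
              run = glue.start_job_run(
                  JobName="example-etl-job",
                  Arguments={"--source_bucket": bucket, "--source_key": key},
              )
              print(f"Started Glue job run {run['JobRunId']} for s3://{bucket}/{key}")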
  • Federal Deposit Insurance Corporation (FDIC)
    Jr. Python/SQL Developer
    Federal Deposit Insurance Corporation (FDIC) Mar 2018 - Sep 2019
    Arlington, Virginia, United States
    • Gathered business requirements from the end users.
    • Created complex stored procedures implementing the business logic and created RDL files for reporting.
    • Built data pipelines in Airflow on GCP for ETL jobs using a range of Airflow operators (a sketch follows this entry).
    • Worked with GCP Dataproc, GCS, Cloud Functions, and BigQuery.
    • Moved data between GCP and Azure using Azure Data Factory.
    • Worked with nested stored procedures and functions and tuned the performance of existing SQL code.
    • Worked with complex customer data related to events such as delinquency, bankruptcy, and collections.
    • Created objects such as tables and views, and developed SSIS packages to load data, including incremental loading into the target tables and views.
    • Designed a library for emailing executive reports from the Tableau REST API using Python, Kubernetes, Git, Azure, CodeBuild, and Airflow.
    • Used PyTorch to develop fraud detection models, improving the accuracy of real-time fraud detection systems and reducing financial risk for clients.
    • Administered Tableau Server, including creating a user-rights matrix for permissions and roles, monitoring report usage, and creating sites for various departments.
    • Created SSIS packages for moving data between databases and maintained error logging for those packages.
    • Moved data across tables with one-to-many and many-to-one relationships.
    • Used the Cloud Shell SDK in GCP to configure Dataproc, Cloud Storage, and BigQuery.
    • Installed and configured Apache Airflow against the S3 bucket and the Snowflake data warehouse and created DAGs to run in Airflow.
    • Wrote, compiled, and executed Apache Spark programs in Scala to perform ETL jobs on ingested data.
    • Used Lambda functions and Step Functions to trigger Glue jobs and orchestrate the data pipeline.
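    The Airflow bullet above refers to DAG-based ETL pipelines; the sketch below is a minimal, generic daily DAG, not the actual FDIC pipeline. The DAG id, schedule, and task callables are placeholders, and it assumes a standard Apache Airflow 2.x installation.

      # Minimal sketch of an Airflow DAG for a daily extract-and-load job.
      # DAG id, schedule, and the task callables are placeholders.
      from datetime import datetime

      from airflow import DAG
      from airflow.operators.python import PythonOperator

      def extract(**context):
          # Placeholder: pull data from the source system (e.g. an API or a GCS bucket).
          print("extracting data")

      def load(**context):
          # Placeholder: load the transformed data into the warehouse (e.g. BigQuery).
          print("loading data")

      with DAG(
          dag_id="example_daily_etl",
          start_date=datetime(2024, 1, 1),
          schedule_interval="@daily",
          catchup=False,
      ) as dag:
          extract_task = PythonOperator(task_id="extract", python_callable=extract)
          load_task = PythonOperator(task_id="load", python_callable=load)

          extract_task >> load_task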
  • Federal Deposit Insurance Corporation (FDIC)
    Software Engineer Intern
    Federal Deposit Insurance Corporation (FDIC) Sep 2017 - Feb 2018
    Arlington, Virginia, United States
    • Developed and deployed data pipelines in the cloud (Azure and GCP) and performed data engineering functions: data extraction, transformation, loading, and integration in support of enterprise data infrastructure, the data warehouse, operational data stores, and master data management.
    • Developed SSIS packages and stored procedures to retrieve accounts from systems such as Settlements, Foreclosures, Transactions, and Mortgages (a sketch of calling such a procedure from Python follows this entry).
    • Created mock-up data for testing the reports.
    • Implemented Apache Airflow for authoring, scheduling, and monitoring data pipelines; applied machine learning techniques (decision trees, linear and logistic regression) and statistical modeling.
    • Tuned the performance of table-valued functions.
    • Created reports using SSRS and used Looker reports to bring in client data.
    • Maintained and developed complex SQL queries, stored procedures, views, functions, and reports meeting customer requirements on Microsoft SQL Server 2008 R2.
    • Created drill-down, drill-through, and cascaded parameterized reports according to customer requirements.
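    The stored-procedure bullet above mentions retrieving account data; the sketch below shows one way such a procedure could be called from Python, not code from this role. The connection string and procedure name are hypothetical, and it assumes pyodbc and a SQL Server ODBC driver are installed.

      # Minimal sketch: call a SQL Server stored procedure from Python and print the rows.
      # The connection string and procedure name are placeholders.
      import pyodbc

      conn = pyodbc.connect(
          "DRIVER={ODBC Driver 17 for SQL Server};"
          "SERVER=example-sql-server;DATABASE=ExampleAccounts;Trusted_Connection=yes;"
      )
      cursor = conn.cursor()

      # Execute the (hypothetical) procedure with a source-system parameter.
      cursor.execute("EXEC dbo.usp_GetAccountsBySystem ?", "Mortgages")
      for row in cursor.fetchall():
          print(row)

      conn.close()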
  • Panasonic India
    Python Software Developer
    Panasonic India Jun 2015 - Dec 2016
    Bengaluru, Karnataka, India
    • Automated workflows that had previously been run manually, using Python scripts and Unix shell scripting.
    • Created, activated, and developed in Anaconda environments.
    • Used Python unit and functional testing modules such as unittest, unittest2, mock, and custom frameworks in line with Agile software development methodologies.
    • Developed Sqoop scripts to handle change data capture, processing incremental records between newly arrived and existing data in RDBMS tables.
    • Installed Hadoop, MapReduce, HDFS, and AWS components and developed multiple MapReduce jobs in Pig and Hive for data cleaning and pre-processing.
    • Managed datasets with pandas DataFrames and queried MySQL from Python using the MySQL Connector and MySQLdb packages to retrieve information (see the sketch after this entry).
    • Took part in web and application development using Python 3.5, HTML5, CSS3, AJAX, JSON, and jQuery.
    • Developed and tested many dashboard features using Python, Java, Bootstrap, CSS, JavaScript, and jQuery.
    • Generated Django forms to record data about online users and wrote test cases with pytest.
    • Implemented and modified SQL queries, functions, cursors, and triggers per the client's requirements.
    • Cleaned and processed third-party spending data into deliverables in the required format using Excel macros and Python libraries such as NumPy, SQLAlchemy, and Matplotlib.
    • Used pandas to shape the data into time-series and tabular formats for manipulation and retrieval.
    • Helped migrate data from the old server to the Jira database (matching fields) with Python scripts that transferred and verified the information.
    • Analyzed and formatted data with machine learning algorithms in scikit-learn.
    • Experienced in Python, Jupyter, and the scientific computing stack (NumPy, SciPy, pandas, and Matplotlib).
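    One bullet above mentions querying MySQL from Python into pandas; the sketch below is a minimal illustration, not code from this role. The connection details, table, and column names are placeholders, and it assumes mysql-connector-python and pandas are installed.

      # Minimal sketch: query MySQL from Python and load the result into a pandas DataFrame.
      # Connection details, table, and column names are placeholders.
      import mysql.connector
      import pandas as pd

      conn = mysql.connector.connect(
          host="localhost", user="example_user", password="example_pass", database="example_db"
      )
      cursor = conn.cursor()
      cursor.execute(
          "SELECT id, created_at, amount FROM example_orders WHERE created_at >= %s",
          ("2016-01-01",),
      )
      df = pd.DataFrame(cursor.fetchall(), columns=[col[0] for col in cursor.description])

      print(df.head())
      cursor.close()
      conn.close()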

Frequently Asked Questions about Bhavana V.

What company does Bhavana V. work for?

Bhavana V. works for Credit Suisse.

What is Bhavana V.'s role at the current company?

Bhavana V.'s current role at Credit Suisse is Senior Python Data Engineer.

Who are Bhavana V.'s colleagues?

Bhavana V.'s colleagues are Deepak Kumar Srivastava, Harshita T., Nadejda Zaets, Rebekah Harper, Laure Walther, Amit Bhattacharya, Gabriela Zatorska.
