Akanksha Mandala

Akanksha Mandala Email and Phone Number

Data Engineer at PIMCO Investments | Python, SQL, Scala | PySpark | PostgreSQL, MongoDB, AWS, GCP, Azure, Redshift, Snowflake | SSIS | Tableau, Apache Hive | TensorFlow | GitLab | Kubernetes, Docker, Terraform | Power BI @ PIMCO
Newport Beach, California, United States
Akanksha Mandala's Location
Newport Beach, California, United States
About Akanksha Mandala

With 9 years of experience, I am a proficient Data Engineer specializing in Python, SQL, Scala, and data processing frameworks such as Pandas, NumPy, and Apache Spark. I excel in architecting and implementing robust data solutions, leveraging a range of databases including PostgreSQL, MongoDB, AWS S3, and Azure SQL Data Warehouse. Skilled in orchestration tools such as SSIS, Apache Airflow, and AWS Glue, I ensure seamless data flow and automation. Proficient in visualization with Tableau and Apache Hive, I integrate advanced analytics with TensorFlow and manage containerized applications using Kubernetes, Docker, and Terraform. I am passionate about solving complex data challenges and driving innovation.

As a leader in cross-functional teams, I excel in agile project delivery and effective communication with non-technical stakeholders. Keen on continuous learning, I stay updated with the latest trends and bring diverse industry experience, including retail and financial services. Known for high-quality, timely solutions, I optimize data processes and enhance team productivity for data-driven decision-making.

Akanksha Mandala's Current Company Details
PIMCO

Website:
pimco.com
Employees:
3497
Akanksha Mandala Work Experience Details
  • Pimco
    Senior Data Engineer
    Pimco Sep 2022 - Present
    Newport Beach, California, United States
    I've acquired extensive experience in architecting and implementing data solutions tailored for financial analytics, leveraging Python, SQL, and Kubeflow. My expertise includes developing efficient ETL pipelines in Snowflake and integrating MongoDB and Apache Kafka for real-time data processing. I ensure timely and accurate data delivery through Airflow automation and utilize Google Cloud Storage and BigQuery for scalable, secure data storage. Engaged with various Google Cloud Platform (GCP) services, including Compute Engine, Cloud Load Balancing, Cloud Storage, and Cloud SQL. Performed troubleshooting and fine-tuning of Talend Data Fabric jobs to optimize data processing efficiency and reliability on GCP. Designing data models in Teradata and employing Google Cloud Dataflow for efficient streaming and batch processing are integral parts of my skill set. Worked collaboratively with cross-functional teams to fine-tune Talend Data Fabric configurations, ensuring optimal performance within the GCP ecosystem. I collaborate cross-functionally to drive innovative, data-driven solutions in finance while prioritizing data governance and security, utilizing Pandas and dbt for data transformation and analysis of financial datasets. My proficiency extends to streamlining data pipelines, implementing collaborative analytics, and deploying advanced analytics for deep financial insights. Additionally, I focus on enhancing data visualization and ensuring compliance with financial regulations, driving innovation through technology integration.
  • Regeneron
    Big Data Engineer
    Regeneron Nov 2020 - Sep 2022
    Rensselaer, New York, United States
    Through my journey, I've gained extensive experience in scalable data processing within biotechnology, utilizing tools such as Apache Beam and SBT. I've designed and managed AWS Redshift data warehouses for complex biotech data and employed AWS EMR and ETL for efficient data processing. Developing data pipelines in AWS Glue and leveraging TensorFlow for advanced analytics have been integral to my work. Architecting cloud data solutions using AWS Lake Formation and Scala, integrating DynamoDB for managing large-scale datasets, and employing Kubernetes for scalable deployment are among my key proficiencies. Configured databases on Google Cloud Platform (GCP) using RDS, established storage solutions utilizing S3 buckets, and implemented instance backups to S3 buckets. Utilized Talend Data Fabric to establish and enforce data governance policies and procedures, ensuring adherence to data security and compliance standards on GCP. I prioritize data security, compliance, and accuracy while facilitating data-driven decision-making in biotech research and development. Performed thorough performance analysis and optimization of Talend Data Fabric jobs to reduce resource consumption and mitigate cost overheads on GCP. Collaborating closely with scientific teams, I integrate insights into research, implement agile methodologies for swift project delivery, and streamline data workflows for optimized research processes. My expertise extends to enhancing data visualization and driving innovation through cutting-edge technologies in biotech data analysis.
  • Albertsons
    Data Engineer
    Albertsons Jan 2018 - Oct 2020
    Boise, Idaho, United States
    I've gained extensive experience in retail analytics, utilizing Apache Spark and PySpark for data processing. My skills include implementing scalable data storage with Apache Hadoop, AWS EMR, and AWS S3. I efficiently integrate retail data using AWS Glue and Data Pipeline, manage MongoDB databases, and employ technologies such as Apache Kafka and Docker for real-time data streaming. I'm adept at developing data pipelines with Luigi, utilizing Hadoop YARN for resource management, and optimizing data workflows with MapReduce and Terraform. My expertise extends to managing AWS Redshift for high-performance data warehousing, enhancing retail data analytics with SQL, and collaborating on data-driven strategies for improved customer experiences. I focus on streamlining data workflows for optimized retail operations, implementing scalable architectures, and ensuring data governance and compliance. I prioritize data security best practices, facilitate accessibility for diverse stakeholders, and drive innovation in retail analytics through advanced data technologies.
  • Impetus
    ETL Pipeline Developer
    Impetus Mar 2016 - Aug 2017
    Chennai, Tamil Nadu, India
    Proficient in Python-based ETL pipeline development, ensuring efficient data processing and integration. Experienced in utilizing SSIS and Azure SQL Data Warehouse for scalable data solutions, alongside Apache Airflow for automated data workflows. Integrated Talend with SSIS and Azure SQL Data Warehouse to harness scalable data solutions within Microsoft Azure environments. Skilled in managing Microsoft SQL Server databases for robust data storage and retrieval, leveraging Subversion (SVN) and Docker for version control and containerization. Implemented version control and containerization with Talend using Subversion (SVN) and Docker, enhancing collaboration and deployment processes. Enhanced analytics capabilities through efficient data source management and architecture, implementing SQL for advanced querying and manipulation. Collaborated on data projects to optimize ETL performance, scalability, and reliability, ensuring data accuracy, integrity, and security throughout the process. Implemented CI/CD pipelines utilizing Talend within Azure DevOps, ensuring seamless deployment and delivery of data solutions. Driving innovation in data engineering through advanced technology integration.
  • Metalyst Forgings Ltd
    Data Analyst
    Metalyst Forgings Ltd Apr 2014 - Jan 2016
    Hyderabad, Telangana, India
    I've honed expertise in advanced data analysis using Microsoft Excel and PostgreSQL for efficient data storage. Python, including pandas and NumPy, fuels my data processing proficiency, while Apache Hive manages large datasets effectively. Tableau crafts insightful visualizations, and QlikView drives real-time analysis. GitLab ensures version control, and I integrate diverse data sources for accuracy. I specialize in data warehouse design, SQL-based solutions, and collaborative, compliance-driven projects.

Frequently Asked Questions about Akanksha Mandala

What company does Akanksha Mandala work for?

Akanksha Mandala works for PIMCO.

What is Akanksha Mandala's role at the current company?

Akanksha Mandala's current role is Data Engineer at PIMCO Investments.

Who are Akanksha Mandala's colleagues?

Akanksha Mandala's colleagues are Shawn Carlin, Mirna Elaasar, Andrew Freeman, Tatiana Avanesyan, James Ingram, Rob Renkins, Juan Herrera.
