Priyanka S

Priyanka S Email and Phone Number

Data Engineer at BrokerLink: “Crafting tomorrow’s insights through data innovation.” Python | SQL | AWS | Databricks | Snowflake @ BrokerLink
Ontario, Canada
Priyanka S's Location
Brampton, Ontario, Canada
About Priyanka S

I’m a data engineer with a proven track record of delivering impactful solutions across diverse industries. At BrokerLink, I led initiatives to modernize data processing, spearheading projects that produced substantial efficiency gains and improved data integrity. I engineered a robust data pipeline solution that reduced processing time by 50% and significantly improved data accuracy. In my previous role at TMF Group, I orchestrated the migration of legacy systems to modern cloud infrastructure, cutting operational costs by 30% and increasing scalability to accommodate growing data volumes. With expertise in a wide array of data engineering tools and languages, I am dedicated to staying at the forefront of the rapidly evolving data landscape and to pushing the boundaries of data engineering. I’m deeply passionate about harnessing the power of data to uncover actionable insights and drive business success.

Priyanka S's Current Company Details
BrokerLink

Data Engineer at BrokerLink: “Crafting tomorrow’s insights through data innovation.” Python | SQL | AWS | Databricks | Snowflake
Ontario, Canada
Website:
brokerlink.ca
Employees:
553
Priyanka S Work Experience Details
  • BrokerLink
    Data Engineer
    BrokerLink Sep 2021 - Present
    • Built reusable data ingestion and data transformation frameworks using Python
    • Wrote Terraform scripts to automate AWS services, including ELB and CloudFront
    • Used Kubernetes and Docker extensively for the runtime environment
    • Configured and optimized Hudi datasets for efficient data processing and storage
    • Wrote the configuration that allows Airflow to communicate with its PostgreSQL database
    • Collaborated with the team to ensure proper integration and use of Hudi for real-time data processing
    • Designed, developed, and deployed analytical and business intelligence solutions using big data technologies: Hive, PySpark, SSRS
    • Assisted in designing and developing a data lake and ETL processes using Python and the Hadoop ecosystem
    • Implemented the AWS Elastic Container Service (ECS) scheduler to automate application deployment in the cloud using Docker automation techniques
    • Documented and communicated Hudi implementation details, providing clear, comprehensive documentation for the client's reference and future use
    • Troubleshot data issues in HANA and Tableau reports
    • Implemented data warehouse solutions in AWS Redshift; migrated data from various databases to AWS Redshift, Glue, RDS, ELB, EMR, DynamoDB, and S3; performed data profiling, cleansing, and data governance on major data flows
    • Worked with Oracle Database, Redshift, PostgreSQL, and Snowflake
    • Used ReactJS/AngularJS templating for faster compilation and reusable components
    • Architected and configured a virtual data center in the AWS cloud to support Enterprise Data Warehouse hosting, including Virtual Private Cloud (VPC), public and private subnets, security groups, and route tables
    • Implemented stable ReactJS/AngularJS components and functions
    • Built a web application for online services (Ruby on Rails, JavaScript, PostgreSQL, CSS, HTML, Ajax, SOAP)
  • TMF Group
    Data Engineer
    TMF Group Feb 2020 - Aug 2021
    Toronto, Ontario, Canada
    • Involved in all phases of the SDLC (Software Development Life Cycle), beginning with requirements gathering, and participated in meetings with business users and business analysts
    • Programmatically created CI/CD pipelines in Jenkins using Groovy scripts
    • Wrote various data normalization jobs for new data ingested into Redshift
    • Used Kubernetes to deploy, scale, load-balance, and manage Docker containers across multiple namespaced versions
    • Designed, set up, maintained, and administered Azure SQL Database, Azure Analysis Services, Azure SQL Data Warehouse, and Azure Data Factory
    • Managed logical and physical data models in the ER Studio repository based on subject-area requests for the integrated model; developed data mapping, data governance, and transformation and cleansing rules involving OLTP and ODS
    • Developed JSON scripts for deploying pipelines in Azure Data Factory (ADF) that process data using the SQL activity
    • Used JSON schemas to define table and column mappings from S3 data to Redshift
    • Good understanding of Hadoop and YARN architecture, including daemons such as Job Tracker, Task Tracker, NameNode, DataNode, and Resource/Cluster Manager, as well as Kafka (distributed stream processing)
    • Responsible for migrating the database to AWS Redshift
    • Used Apache Airflow in the Azure Composer environment to build data pipelines, employing operators such as the Bash operator, Hadoop operators, Python callables, and branching operators
    • Worked with Terraform templates to automate Azure IaaS virtual machines using Terraform modules, and deployed virtual machine scale sets in the production environment
    • Developed PySpark applications in a distributed environment to load large numbers of CSV files with differing schemas into Hive ORC tables
    • Integrated Azure Log Analytics with Azure VMs to monitor and store log files
  • WTW
    Data Engineer
    WTW Dec 2016 - Nov 2019
    Thane, Maharashtra, India
    • Involved in the entire project life cycle, from design discussions to production deployment
    • Developed a web application in Groovy/Grails with MongoDB as the data store
    • Provided guidance to the development team working on PySpark
    • Involved in designing the end-to-end data pipeline for ingesting data into the data lake
    • Extensive working experience with JIRA and ALM
    • Developed file-cleaning utilities using Python libraries
    • Created several Databricks Spark jobs with PySpark to perform table-to-table operations
    • Followed Agile methodologies throughout the project
    • Developed SSIS packages to ETL data into heterogeneous data warehouses

Priyanka S Education Details
  • University of Mumbai

Frequently Asked Questions about Priyanka S

What company does Priyanka S work for?

Priyanka S works for BrokerLink.

What is Priyanka S's role at the current company?

Priyanka S's current role is Data Engineer at BrokerLink.

What schools did Priyanka S attend?

Priyanka S attended the University of Mumbai.

Who are Priyanka S's colleagues?

Priyanka S's colleagues are Brenda Lajeunesse, Sahil Kadhiwala, Sarah Schofield, Don Smith, Debra Griese, Breanne Lanouette, Jackson K John.

Not the Priyanka S you were looking for?

  • Priyanka S

    Manager - Enterprise Productivity at Scotiabank
    Greater Toronto Area, Canada
  • Priyanka S

    Corporate Tax - US and Canada
    Canada
  • Priyanka S

    Health Informatics Research Associate |Public Health Care | Supply Chain | Healthcare Insurance| Dental Sciences|Internationally Trained Dentist
    Greater Toronto Area, Canada
  • Priyanka S

    Toronto, ON
