Sasidhar Punna
Visa: H1B; I-140 approved.

As a Lead Azure Data Engineer at Sleep Number Corporation, I have over 10 years of experience deploying data warehousing methodologies and data modeling to conceptualize and deliver user-centric solutions. I am a certified Scrum Master, AWS Developer - Associate, and Microsoft Azure Fundamentals professional, with a strong background in Information Technology and Electrical and Electronics Engineering.

My core competencies include building Spark applications on Databricks using PySpark and Spark SQL to extract, transform, and aggregate data from multiple file formats, such as Parquet, JSON, and Avro, to uncover insights into customer usage patterns. I also have extensive experience managing Azure Data Lake and Delta Lake, and migrating on-premises databases to Azure Data Lake Store using Azure Data Factory. I am adept at using BI tools such as Power BI and QlikView to enhance reporting capabilities and develop BI applications to client requirements. I have a demonstrated ability to liaise with key stakeholders, work within and across Agile teams, and support technical solutions across a full stack of technologies.
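The summary above mentions Spark applications that extract, transform, and aggregate Parquet, JSON, and Avro data on Databricks. A minimal PySpark sketch of that pattern, with all paths, columns, and the output table invented for illustration (the Avro reader also assumes the spark-avro package is available):

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("usage-patterns").getOrCreate()

# Read the same logical events from three hypothetical raw zones.
parquet_df = spark.read.parquet("/mnt/raw/events_parquet/")
json_df = spark.read.json("/mnt/raw/events_json/")
avro_df = spark.read.format("avro").load("/mnt/raw/events_avro/")  # needs spark-avro

# Align the sources on a shared schema before combining them.
events = (parquet_df
          .unionByName(json_df, allowMissingColumns=True)
          .unionByName(avro_df, allowMissingColumns=True))

# Aggregate per customer to surface usage patterns.
usage = (events.groupBy("customer_id")
         .agg(F.count("*").alias("event_count"),
              F.max("event_ts").alias("last_seen")))

usage.write.mode("overwrite").saveAsTable("analytics.customer_usage")
```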
Cloud Platform Architect
Brillfy Technology Inc, McKinney, TX, US
Website: brillfy.com
Employees: 74
Lead Data Engineer
Sleep Number Corporation, Jun 2021 - Present, Minneapolis, Minnesota, US
• Designed and architected Azure Databricks workspace installations through Terraform templates.
• Configured and administered Unity Catalog metastores in multiple regions.
• Developed Python class modules to reduce redundant code.
• Developed custom scripts to Auto Load raw data into Delta Lake (a sketch follows this list).
• Created Databricks notebooks using SQL and Python and automated them with jobs.
• Created Spark clusters and configured high-concurrency clusters in Azure Databricks to speed up the preparation of high-quality data.
• Developed Spark applications using PySpark and Spark SQL for data extraction, transformation, and aggregation from multiple file formats, analyzing and transforming the data to uncover insights into customer usage patterns.
• Profound understanding of Structured Streaming; developed notebooks to process real-time data.
• Configured and monitored Databricks compute and compute policies and their usage through Lakehouse dashboards and Power BI backed by system tables.
• Designed and developed CI/CD flows on Azure DevOps using YAML scripts for Azure Data Factory and Azure Databricks.
• Developed Data Factory pipelines to transform JSON and Avro payloads from REST APIs into a Storage Account.
• Developed IICS mappings and taskflows to heavy-lift data from on-premises Oracle systems.
• Created scopes and access tokens on Databricks so PySpark scripts can connect to Key Vaults.
• Created pipelines in ADF using linked services, datasets, and pipelines to extract, transform, and load data from sources such as Azure SQL, Blob Storage, and Azure SQL Data Warehouse.
• Profound knowledge of Azure Virtual Networks, with experience creating Azure Databricks and Data Factory resources under them.
• Created Stream Analytics jobs to process Kafka events.
• Responsible for unit testing and for creating detailed unit test documents with all possible test cases and scripts.
• Developed Linux scripts to automate routine tasks where appropriate.
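A minimal sketch of the Auto Loader pattern referenced in this role, assuming a Databricks notebook where `spark` is predefined; every path and table name here is an assumption:

```python
# Hypothetical Databricks Auto Loader cell: incrementally land raw JSON
# files in a Delta table.
raw = (spark.readStream
       .format("cloudFiles")                       # Auto Loader source
       .option("cloudFiles.format", "json")
       .option("cloudFiles.schemaLocation", "/mnt/lake/_schemas/orders")
       .load("/mnt/landing/orders/"))

(raw.writeStream
    .option("checkpointLocation", "/mnt/lake/_checkpoints/orders")
    .trigger(availableNow=True)                    # drain the backlog, then stop
    .toTable("bronze.orders"))
```

Scheduling a cell like this as a Databricks job matches the automated-notebooks bullet above.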
Data Engineer
USAA, Feb 2019 - Jun 2021, San Antonio, Texas, US
Description: This project focused on Anti-Money Laundering (AML)/KYC compliance to maintain secure financial institutions. We used DataStage, Azure Data Factory (V2), and Apache NiFi as ETL tools to calculate members' compliance status by validating the questions they answered, with Db2, Netezza, REST APIs, and Kafka events as source systems.
Responsibilities:
● Built Databricks notebooks to transform and load data and read it back as Parquet.
● Used Spark SQL on Databricks to merge incremental data into Delta tables (see the sketch after this list).
● Created Databricks notebooks using SQL and Python and automated them with jobs.
● Profound understanding of federal financial guidelines and Test-Driven Development (TDD).
● Responsible for unit testing and for creating detailed unit test documents with all possible test cases and scripts.
● Used Control-M to schedule DataStage jobs and Logic Apps to schedule ADF pipelines.
● Used Apache NiFi to develop a process for reading Apache Kafka events.
● Developed Linux scripts to integrate the data flow between Apache NiFi and DataStage servers.
Environment: Azure Data Factory (V2), Azure Data Lake Gen2, IBM Information Server 11.5, Apache NiFi, Azure Logic Apps, RHEL 8, PL/SQL, SQuirreL, RESTful services, Db2, Netezza, Apache Kafka.
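A hedged sketch of the incremental merge described above, using the Delta Lake Python API on Databricks; the table, key column, and staging path are assumptions:

```python
from delta.tables import DeltaTable

# Hypothetical target Delta table and staged increment.
target = DeltaTable.forName(spark, "compliance.members")
updates = spark.read.parquet("/mnt/staging/members_delta/")

# Upsert: update members that already exist, insert the rest.
(target.alias("t")
 .merge(updates.alias("s"), "t.member_id = s.member_id")
 .whenMatchedUpdateAll()
 .whenNotMatchedInsertAll()
 .execute())
```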
Principal Integration Engineer
Chevron, Jul 2018 - Feb 2019, San Ramon, CA, US
● Implemented procedures for identifying dimensions, measures, and work aspects.
● Created ADF pipelines to pull data from Cloudera HDFS hosted on premises.
● Used ADLS as an intermediate data warehouse with the help of Delta Lake.
● Deployed and executed SSIS packages from ADF.
● Conducted detailed analysis of existing business requirements to produce technical designs.
● Provided technical assistance during requirements meetings in coordination with end users.
● Developed software deployment scripts and automated deployment functions.
● Supported technical team members in installing and configuring all SQL servers.
● Involved in creating Hive tables, loading them with data, and writing Hive queries (a sketch follows this list).
● Assisted in developing and maintaining logical and physical data models.
● Established effective processes for modifying existing SSIS packages to meet new business requirements.
● Developed T-SQL packages, pivot tables, and SQL Server Reporting Services reports.
Environment: SSIS, Azure Data Factory, UNIX, SQL, TOAD 8.0, Windows 10, RHEL 7.
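One hedged way the Hive-table work above could look, expressed as HiveQL run through PySpark with Hive support enabled; the database, table, and columns are invented:

```python
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("hive-tables")
         .enableHiveSupport()      # required for Hive-managed tables
         .getOrCreate())

spark.sql("CREATE DATABASE IF NOT EXISTS ops")
spark.sql("""
    CREATE TABLE IF NOT EXISTS ops.well_readings (
        well_id    STRING,
        reading_ts TIMESTAMP,
        pressure   DOUBLE
    )
    STORED AS PARQUET
""")

# Load a staged HDFS file, then answer a typical reporting question.
spark.sql("LOAD DATA INPATH '/staging/readings' INTO TABLE ops.well_readings")
spark.sql("""
    SELECT well_id, to_date(reading_ts) AS reading_date,
           AVG(pressure) AS avg_pressure
    FROM ops.well_readings
    GROUP BY well_id, to_date(reading_ts)
""").show()
```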
Senior Azure Developer
GSK, Aug 2017 - Jul 2018, Brentford, Middlesex, GB
● Automated manual data movement from AWS S3 to Azure Data Lake.
● Created Python scripts to parse CSV files and build nested JSON to push into an API (see the sketch after this list).
● Developed Python scripts to automate the Challenges ETL ingest process.
● Created Python scripts to upload files to and download files from Azure Blob Storage.
● Used the Requests library to connect the scripts to different APIs.
● Created and maintained templates, definitions, and documentation for the Challenges ETL ingest process.
● Followed Agile (Scrum) practices, attending daily stand-ups and sprint retrospectives to produce quality deliverables on time and set weekly goals.
● Provided engineering support when building, deploying, configuring, and supporting systems for customers.
● Worked with version control systems such as Git and Apache SVN to maintain a consistent state throughout application development.
● Worked with the offshore team and gathered daily updates from them.
● Documented tasks in Confluence and linked them to JIRA tickets.
● Created Python scripts to create test users for different portals in each environment, such as UAT and Stage.
● Worked closely with business groups (analysts, developers, and architects) to understand business requirements and end-to-end development activities of the application.
● Facilitated daily scrums, stand-ups, and meetings to monitor project progress and resolve any issues the team encountered.
● Shaped team behavior through strong management via the Agile method.
● Removed project obstacles and developed solutions with the team.
● Ensured milestones were reached and deadlines met throughout the project life cycle.
● Built strong relationships with stakeholders, application users, and program owners.
● Took responsibility, organized, and followed up and through for successful product delivery.
Environment: Python 3.6, Azure Data Lake, RHEL 7, Nginx, AWS EC2 server, AWS S3 bucket, RESTful servers, GitLab.
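A minimal sketch of the CSV-to-nested-JSON ingest script mentioned above; the file layout, endpoint, and field names are all assumptions:

```python
import csv
import requests

def rows_to_payload(path):
    """Group flat CSV rows into one nested JSON document per challenge."""
    challenges = {}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            challenges.setdefault(row["challenge_id"], []).append(
                {"user": row["user_id"], "score": float(row["score"])})
    return {"challenges": [{"id": cid, "entries": entries}
                           for cid, entries in challenges.items()]}

# requests serializes the dict as JSON and sets the Content-Type header.
resp = requests.post("https://api.example.com/v1/challenges",
                     json=rows_to_payload("challenges.csv"))
resp.raise_for_status()
```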
Azure Data Ops Consultant/Python Developer
GSK, Nov 2016 - Aug 2017, Brentford, Middlesex, GB
● Acted as the single point of contact (SPOC) for the client, facilitating the rollout of new features using DataStage.
● Created pipelines in ADF using linked services, datasets, and pipelines to extract, transform, and load data between sources such as Azure SQL, Blob Storage, and Azure SQL Data Warehouse, including write-back.
● Worked on Microsoft Parallel Data Warehouse/MPP (APS 2015), Azure, AWS, business intelligence, data warehousing, ETL/ELT, data migration, and production support.
● Worked on a cloud POC to select the optimal cloud vendor (AWS, Azure, Snowflake) against a rigid set of success criteria.
● Architected, designed, and operationalized large-scale data and analytics solutions on cloud data warehouses such as Redshift and Snowflake.
● Designed, developed, and implemented performant ETL pipelines using the Python API (PySpark) of Apache Spark on AWS EMR (a sketch follows this list).
● Integrated data storage solutions in Spark, especially AWS S3 object storage.
● Tuned the performance of PySpark scripts.
● Worked on Azure, Snowflake, and Redshift cloud projects, designing dynamic ETL solutions to load data from on premises to the cloud.
Environment: Azure Data Factory (V2), Azure Databricks, Python 3, SSIS, Azure SQL, Azure Data Lake.
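An illustrative PySpark-on-EMR ETL step of the kind described above, reading from and writing back to S3; bucket names, prefixes, and columns are hypothetical:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("s3-etl").getOrCreate()

orders = spark.read.parquet("s3://example-raw/orders/")   # EMRFS path

cleaned = (orders
           .dropDuplicates(["order_id"])
           .withColumn("order_date", F.to_date("order_ts")))

# Partitioning by date keeps downstream reads selective, one common
# lever when tuning PySpark jobs for performance.
(cleaned.write
 .partitionBy("order_date")
 .mode("overwrite")
 .parquet("s3://example-curated/orders/"))
```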
MSBI Developer
GSK, Dec 2015 - Nov 2016, Brentford, Middlesex, GB
● Extensively used ETL to load data from source systems in different formats into target DB2 and Teradata databases.
● Implemented and delivered MSBI platform solutions, developing and deploying ETL, analytical, reporting, and scorecard/dashboard workloads on SQL Server 2014 using SSAS, SSIS, and SSRS.
● Collected business requirements and produced reports (SSRS, Excel) accordingly.
● Worked in the BI Monkey framework to extract data from different sources.
● Involved in a migration that adopted the BI Monkey framework and a new database architecture.
● Monitored full/incremental/daily loads and supported all scheduled ETL jobs for batch processing.
● Monitored SQL Server Agent jobs daily, troubleshooting and fixing issues.
● Performed regular database maintenance, checking disk defragmentation procedures in different environments.
● Tuned database indexes using the Database Engine Tuning Advisor to resolve performance issues.
● Developed and optimized database structures, stored procedures, DDL triggers, SQL Server Audit, and user-defined functions.
● Wrote stored procedures, functions, common table expressions, and MERGE statements to handle database automation tasks (see the sketch after this list).
Environment: SQL Server 2014, ETL - SSDT-SSIS, SSAS, SSMS, T-SQL, SQL Agent, DB2, Teradata.
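A hedged sketch of the MERGE-based automation listed above, issued from Python via pyodbc so the examples stay in one language; the DSN, tables, and columns are assumptions:

```python
import pyodbc

conn = pyodbc.connect("DSN=reporting;Trusted_Connection=yes")

# Upsert staged customers into the dimension table.
conn.execute("""
    MERGE dbo.DimCustomer AS t
    USING staging.Customer AS s
        ON t.CustomerKey = s.CustomerKey
    WHEN MATCHED THEN
        UPDATE SET t.Email = s.Email, t.City = s.City
    WHEN NOT MATCHED BY TARGET THEN
        INSERT (CustomerKey, Email, City)
        VALUES (s.CustomerKey, s.Email, s.City);
""")
conn.commit()
```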
Power BI Developer
GSK, Jan 2015 - Dec 2015, Brentford, Middlesex, GB
● Imported data from SQL Server and Azure SQL databases into Power BI to generate reports.
● Embedded Power BI reports into internal applications.
● Created DAX queries to generate computed columns in Power BI.
● Generated computed tables in Power BI using DAX.
● Created new stored procedures and optimized existing queries and stored procedures.
● Created Azure Blob Storage to import/export data to/from .CSV files.
● Used Power BI and Power Pivot to develop data analysis prototypes, and Power View and Power Map to visualize reports.
● Configured the Azure Blob Storage where the source files are stored.
● Loaded files from Blob Storage into an Azure SQL database using Azure Data Factory (V1 & V2).
● Moved data from staging to the DWH in Azure SQL Database using stored procedures.
● Created a tabular model on top of the Azure SQL database, i.e., the DWH schema objects.
● Created Power BI reports from the Azure tabular model and published them to a Power BI portal under a specific workspace.
● Connected to the Azure Analysis Services financial database through Power BI Desktop to develop financial reports.
● Created SSAS tabular cubes by connecting to Azure DB and Azure DW.
● Published Power BI reports and dashboards to web clients and mobile apps.
● Used Power BI gateways to keep the dashboards and reports up to date (a refresh sketch follows this list).
Environment: Power BI Desktop, Azure Data Factory (V1 & V2), Azure SQL, Azure Data Lake Storage
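One way to keep published datasets current, complementing the gateway bullet above, is to trigger a refresh through the Power BI REST API; the workspace and dataset IDs and the token acquisition are placeholders:

```python
import requests

GROUP_ID = "<workspace-guid>"      # hypothetical workspace (group) ID
DATASET_ID = "<dataset-guid>"      # hypothetical dataset ID
TOKEN = "<azure-ad-access-token>"  # obtain via Azure AD in a real flow

resp = requests.post(
    f"https://api.powerbi.com/v1.0/myorg/groups/{GROUP_ID}"
    f"/datasets/{DATASET_ID}/refreshes",
    headers={"Authorization": f"Bearer {TOKEN}"},
)
resp.raise_for_status()  # 202 Accepted means the refresh was queued
```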
DataStage Administrator
GSK, Dec 2013 - Jan 2015, Brentford, Middlesex, GB
• Responsible for assigning tickets for DataStage issues and resolving them within SLA.
• Defined the strategy for installing and configuring the new environment with regard to tier placement.
• Installed Information Server patches for the Services, Engine, and Client tiers.
• Configured IBM InfoSphere DataStage and QualityStage; created users, groups, and credential mappings, and assigned administrator and user roles and privileges in different environments.
• Configured environment variables, security, and connectivity; tuned the deployment for performance and backed up the installation.
• Added, deleted, and set up DataStage projects with the DataStage Administrator client and the dsadmin command, and repaired corrupted jobs/projects with the DStageWrapper command.
• Managed server resource allocation and monitored it regularly; wrote a UNIX script to automate this process (a Python rendering follows this list).
• Designed a custom control system for 13 DataStage servers that alerts on high resource usage using UNIX scripts, and designed an SAP BO report to view all servers' performance weekly.
• Ensured that the DataStage server was backed up appropriately and recovered DataStage server data when necessary.
• Worked extensively with the IBM web console and the cleanup_abandoned_locks command to unlock DataStage jobs.
• Performed unit testing, integration testing, and user acceptance testing (UAT) for every patch installation.
• Responsible for starting and stopping the DataStage server during maintenance windows.
• Configured the uvodbc.config and .odbc.ini server files and set up TNS entries.
• Created configuration files to increase the parallelism of DataStage jobs.
• Used DataStage Director and crontab to schedule sequences and jobs.
• Responsible for monitoring and troubleshooting DataStage jobs during production data loads.
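The resource-monitoring automation above was built as UNIX scripts; a rough Python rendering of the same idea, with the threshold, path, and mail addresses invented:

```python
import shutil
import smtplib
from email.message import EmailMessage

def disk_usage_pct(path):
    """Return the percentage of the volume at `path` that is in use."""
    total, used, _free = shutil.disk_usage(path)
    return 100.0 * used / total

# Alert when the (hypothetical) Information Server volume runs hot.
if disk_usage_pct("/opt/IBM/InformationServer") > 85.0:
    msg = EmailMessage()
    msg["Subject"] = "DataStage server disk usage above 85%"
    msg["From"] = "dsadmin@example.com"
    msg["To"] = "ops-team@example.com"
    msg.set_content("Check the engine tier for runaway job logs and scratch files.")
    with smtplib.SMTP("localhost") as smtp:
        smtp.send_message(msg)
```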
Sasidhar Punna Education Details
Jawaharlal Nehru Technological University, Anantapur
Electrical and Electronics Engineering
Frequently Asked Questions about Sasidhar Punna
What company does Sasidhar Punna work for?
Sasidhar Punna works for Brillfy Technology Inc.
What is Sasidhar Punna's role at the current company?
Sasidhar Punna's current role is Cloud Platform Architect.
What is Sasidhar Punna's email address?
Sasidhar Punna's email address is sa****@****ber.com
What schools did Sasidhar Punna attend?
Sasidhar Punna attended Jawaharlal Nehru Technological University, Anantapur.