Paul J

Paul J Email and Phone Number

Data Engineer @ Healthcare

Paul J's Location
Worcester, Massachusetts, United States
About Paul J

I am a Full Stack Developer specializing in Python, with a passion for problem-solving. I have two years of experience as an Assistant System Engineer at Tata Consultancy Services, where I worked on projects for two prominent clients, British Telecom and Google. In that role I gained valuable skills in software development life cycles, project management methodologies, and cross-functional collaboration, and used SQL, JCL, JavaScript, TypeScript, and Java to develop, test, and maintain databases and applications. Currently, I am pursuing a master's degree in computer science at Clark University, studying advanced topics such as cryptography, artificial intelligence, and data mining. I have also completed multiple certifications, including Microsoft Certified: Azure Data Fundamentals and Cryptography I. I am a quick learner and a team player who can communicate technical information to non-technical stakeholders and mentor junior engineers. I am always seeking to expand my skill set and challenge myself, and I am confident that I can make a valuable contribution to any organization as a Full Stack Developer.

Paul J's Current Company Details
Healthcare
Data Engineer
Worcester, MA, US
Paul J Work Experience Details
  • Healthcare
    Data Engineer/ Python Developer
    Healthcare Mar 2024 - Present
    United States
    • Developed a robust microservices architecture to ensure scalable and efficient backend operations.
    • Processed over 1,000 transactions per second in Kafka.
    • Created RESTful APIs using FastAPI for seamless integration between services.
    • Created Docker containers and designed CI/CD configuration management.
    • Integrated Confluent Kafka on Google Cloud for high-throughput, real-time streaming data pipelines.
    • Leveraged Kubernetes for container orchestration and seamless application deployment; utilized Azure cloud infrastructure for developing and managing data solutions.
    • Managed reliable messaging with Azure Service Bus for application communication.
    • Implemented data transformations and business logic in Python.
    • Led initiatives to enrich data from domains such as drugs, stores, and patients.
    • Monitored data pipelines and services using Grafana dashboards.
    • Implemented data storage and transformations in Snowflake.
    • Used SQL Server Profiler and Database Engine Tuning Advisor to identify and resolve performance bottlenecks.
    • Developed and maintained ETL processes using SQL Server Integration Services (SSIS) for data extraction, transformation, and loading.
    • Developed complex SQL queries, stored procedures, functions, and triggers to support application requirements.
    • Automated data pipeline workflows with Apache Airflow DAGs.
    • Managed CI/CD pipelines using Jenkins for continuous integration and deployment.
    • Implemented Jaeger UI for distributed tracing in microservices.
    • Utilized Docker for containerizing applications, ensuring consistency across environments.
    • Conducted data analysis and computations using NumPy.
    Environment: Python, FastAPI, Confluent Kafka, Kubernetes, Azure, Service Bus, Grafana, Snowflake, Airflow, Jenkins, Jaeger UI, Docker, NumPy, Pandas, SOAP, RESTful, PyUnit
  • Tata Consultancy Services
    Data Engineer/ Python Developer
    Tata Consultancy Services Apr 2021 - Aug 2022
    Hyderabad, Telangana, India
    • Worked as part of a team to develop cloud data and analytics solutions.
    • Created and maintained data solutions that make the best use of cloud platforms such as Amazon Web Services (AWS) and analytics services by understanding the business, technology, and data landscape.
    • Designed and implemented data models for optimal storage and retrieval and to meet critical product and business requirements.
    • Worked with the development team to create conceptual, logical, and physical data models and data flows.
    • Analyzed complex data elements, systems, and data flows; developed conceptual, logical, and physical data models; and verified and implemented ETL/ELT pipelines and transformation logic from conception to production deployment.
    • Performed data wrangling of heterogeneous data to explore and discover new insights.
    • Contributed to data governance and data quality best practices, including design reviews, unit testing, code reviews, and continuous integration and deployment.
    • Designed data pipelines with Apache Airflow and manipulated data with Spark SQL and Spark DataFrames.
    • Used Spark to improve performance and optimize existing algorithms in Hadoop with SparkContext, Spark SQL, PySpark, pair RDDs, and Spark on YARN.
    • Used Spark for interactive queries and processing of streaming data, and integrated it with popular NoSQL databases for large volumes of data.
    • Performed data cleaning, feature scaling, and feature engineering using the pandas and NumPy packages in Python.
    • Created, debugged, scheduled, and monitored jobs using Airflow and Oozie.
    • Created several types of data visualizations using Python and Tableau.
    Environment: HDFS, MySQL, NoSQL, Erwin Data Modeler, Microsoft SQL Server, PySpark, Hive, MapReduce, Linux, Eclipse, AWS services (S3, Glue, Athena, Redshift, Direct Connect, CloudWatch, EC2, RDS, EMR), Cloudera, and Python
  • Open Hardware Days
    Data Engineer
    Open Hardware Days Dec 2019 - Dec 2020
    Hyderabad, Telangana, India
    • Collaborated with cross-functional teams to design and develop Hadoop-based applications using tools like HDFS, MapReduce, and Hive.
    • Processed and analyzed large datasets with Hadoop ecosystem tools, while actively adopting new technologies to meet project needs.
    • Spearheaded the integration of emerging technologies to enhance the efficiency and effectiveness of Hadoop-based applications.
    • Assisted in developing strategies for data optimization, resulting in streamlined processes and improved performance outcomes.
    • Used XML for dynamic display of options in select boxes and descriptions on web pages.
    • Wrote subqueries, stored procedures, triggers, cursors, and functions on MySQL, PL/SQL, and PostgreSQL databases.
    • Established database connections for Python by configuring packages such as JDBC and MySQL-Python.
    • Resolved user problems in a timely and accurate manner as a member of the end-user support team.
    • Created environment-specific settings for a new deployment and updated deployment-specific conditional checks in the code base.
    • Implemented RESTful web services for sending and receiving data between multiple systems.
    • Handled all client-side validation using jQuery.
    • Sent text data over web sockets with REST APIs and JSON.
    • Prepared and analyzed reports using Python libraries and was involved in environment setup.
    • Worked with Amazon Web Services (AWS), using EC2 for hosting and Elastic MapReduce (EMR) for data processing, with S3 as the storage mechanism.
    • Used Python and Django to interface with the jQuery UI and manage the storage and deletion of content.
    Environment: Python, Django, MySQL, JDBC, REST API, JSON, jQuery, XML
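
The Python data-enrichment work described in the Healthcare role (joining transaction streams with domain data such as stores) can be illustrated with a minimal sketch. The function name, field names, and sample records below are hypothetical, invented for illustration only:

```python
# Minimal sketch of record enrichment, of the kind that might run inside a
# streaming consumer. All names and sample data are hypothetical.

def enrich_transactions(transactions, stores):
    """Join raw transaction records with store metadata by store_id."""
    store_index = {s["store_id"]: s for s in stores}
    enriched = []
    for tx in transactions:
        store = store_index.get(tx["store_id"], {})
        # Keep the original fields and attach the looked-up store name.
        enriched.append({**tx, "store_name": store.get("store_name", "unknown")})
    return enriched

stores = [{"store_id": 1, "store_name": "Main St Pharmacy"}]
transactions = [{"tx_id": "a1", "store_id": 1, "amount": 19.99}]
print(enrich_transactions(transactions, stores))
```

In a real pipeline the same transform would typically sit behind a Kafka consumer loop or a FastAPI endpoint, with the store lookup backed by a database or cache rather than an in-memory dict.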
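
The data cleaning and feature scaling mentioned in the Tata Consultancy Services role might look like the following pandas/NumPy sketch. The column names and values are hypothetical:

```python
import numpy as np
import pandas as pd

# Hypothetical raw data with a missing value and an unscaled numeric feature.
df = pd.DataFrame({
    "age": [25.0, np.nan, 40.0],
    "income": [30000.0, 52000.0, 61000.0],
})

# Cleaning: fill the missing age with the column median.
df["age"] = df["age"].fillna(df["age"].median())

# Feature scaling: min-max scale income into [0, 1].
income = df["income"]
df["income_scaled"] = (income - income.min()) / (income.max() - income.min())

print(df)
```

Min-max scaling is one common choice; standardization (subtracting the mean and dividing by the standard deviation) is an equally idiomatic alternative when features should be centered.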

Paul J Education Details
  • Clark University
  • Geethanjali College Of Engineering And Technology

Frequently Asked Questions about Paul J

What company does Paul J work for?

Paul J works for Healthcare

What is Paul J's role at the current company?

Paul J's current role is Data Engineer.

What schools did Paul J attend?

Paul J attended Clark University, Geethanjali College Of Engineering And Technology.
