Ajay B

Ajay B Email and Phone Number

Senior Big Data Engineer @ Credit Suisse
Zurich, Zurich, Switzerland
Ajay B's Location
Anoka, Minnesota, United States
About Ajay B

Ajay B is a Senior Big Data Engineer at Credit Suisse.

Ajay B's Current Company Details
Credit Suisse

Senior Big Data Engineer
Zurich, Zurich, Switzerland
Employees:
53,360
Ajay B Work Experience Details
  • Credit Suisse
    Senior Big Data Engineer
    Credit Suisse Apr 2023 - Present
    New York, NY
    As a Senior Big Data Engineer, I lead the design, development, and optimization of complex, large-scale data processing systems that efficiently handle massive volumes of data. I architect scalable data pipelines and infrastructures using advanced big data technologies like Hadoop, Spark, and NoSQL databases. I oversee the creation and enhancement of ETL (Extract, Transform, Load) processes to integrate and transform data from diverse sources, ensuring high data quality and consistency. With deep expertise in programming languages such as Java, Python, and Scala, I develop sophisticated data processing applications and algorithms. I also manage and optimize cloud-based data platforms like AWS, Google Cloud, or Azure, and use tools like Apache Airflow for workflow orchestration. In my role, I mentor junior engineers, lead technical projects, and establish best practices for data engineering. My responsibilities include advanced SQL skills for querying and analyzing large datasets and a strong understanding of data warehousing, data modeling, and real-time processing. Overall, my work is crucial in driving data-driven strategies by ensuring that robust, high-performance data systems are in place to support complex analytics and business intelligence initiatives.
  • Hudson's Bay Company
    Big Data Engineer
    Hudson's Bay Company Dec 2021 - Mar 2023
    New York, NY
    As a Big Data Engineer, I focus on designing, implementing, and managing large-scale data processing systems that handle vast volumes of data efficiently. I work with big data technologies such as Hadoop, Spark, and NoSQL databases to build scalable data pipelines and architectures. My responsibilities include developing and optimizing ETL (Extract, Transform, Load) processes to integrate and transform data from diverse sources, ensuring high data quality and reliability. Proficiency in programming languages like Java, Python, and Scala is essential for writing complex data processing applications and scripts. I also manage cloud-based data platforms such as AWS, Google Cloud, or Azure, and use tools like Apache Airflow for orchestrating workflows. Monitoring and tuning system performance, addressing data-related issues, and ensuring the infrastructure can handle both real-time and batch processing needs are key aspects of my role. Strong skills in SQL for querying large datasets and a solid understanding of data warehousing and data modeling principles are critical. Overall, my work is crucial for enabling data-driven insights by creating robust data systems that support advanced analytics and business intelligence.
  • Baptist Memorial Hospital-Memphis
    Data Engineer
    Baptist Memorial Hospital-Memphis Jun 2020 - Nov 2021
    Memphis, TN
    As a Data Engineer, I am responsible for designing, constructing, and maintaining scalable data pipelines and architectures that support the collection, storage, and analysis of large volumes of data. My role involves working with various data technologies, including relational databases, data warehouses, and big data platforms like Hadoop and Spark. I develop ETL (Extract, Transform, Load) processes to integrate data from multiple sources, ensuring data quality and consistency. Proficiency in programming languages such as Python, Java, or Scala is crucial for building and optimizing data workflows. I also manage data infrastructure, including setting up and configuring cloud-based solutions like AWS, Google Cloud, or Azure, and using tools like Apache Airflow for workflow automation. My responsibilities include monitoring system performance, troubleshooting issues, and ensuring that data systems run efficiently and reliably. Strong skills in SQL for querying and manipulating data, along with knowledge of data modeling and data warehousing concepts, are essential. Overall, my role is vital in enabling data-driven decision-making by ensuring that robust, efficient data pipelines are in place to deliver accurate and timely information to stakeholders.
  • Chubb Group Of Insurance Companies
    Hadoop Developer
    Chubb Group Of Insurance Companies Aug 2017 - Jul 2020
    Malvern, Pennsylvania, United States
    As a Hadoop Developer, I specialize in designing, developing, and managing data processing applications using the Hadoop ecosystem. My role involves building and optimizing scalable data pipelines and workflows with core Hadoop components like HDFS (Hadoop Distributed File System), YARN (Yet Another Resource Negotiator), and MapReduce. I am proficient in programming languages such as Java for MapReduce jobs and Python or Scala for Spark applications. Additionally, I work with tools like Apache Hive for SQL-like queries and Apache Pig for scripting data transformations. My responsibilities include setting up and configuring Hadoop clusters, monitoring performance with tools like Apache Ambari, and troubleshooting issues to ensure smooth operation of data processing tasks. I also have knowledge of data formats such as JSON, Avro, and Parquet, and experience with data integration tools like Apache Sqoop and Apache Flume. Strong problem-solving skills, attention to detail, and the ability to work collaboratively are key aspects of my role, which supports efficient handling of large-scale data and drives data-driven decision-making within the organization.
  • IDBI Bank
    Junior Hadoop Developer
    IDBI Bank Mar 2014 - Jul 2017
    Hyderabad, Telangana, India
    As a Junior Hadoop Developer, I was pivotal in handling and processing large datasets using the Hadoop ecosystem. My role involved setting up data ingestion pipelines with tools like Apache Flume and Sqoop, and developing and optimizing applications using Hadoop components such as HDFS, YARN, and MapReduce, along with technologies like Spark, Hive, and Pig. I also managed system maintenance, including cluster configuration, performance monitoring, and troubleshooting. Key skills included a solid grasp of Hadoop technologies, programming in Java, Python, and Scala, and data querying. Effective problem-solving, attention to detail, and strong communication were essential.
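The ETL pattern that recurs in the roles above (extract raw records, clean and cast them, load an aggregated reporting table) can be sketched in miniature with Python's stdlib sqlite3. The table and field names here are hypothetical, not from any system mentioned in this profile:

```python
import sqlite3

# Hypothetical raw feed: whitespace to strip, a null record to drop,
# and two records for the same day to aggregate on load.
RAW_ROWS = [
    ("2024-01-02", " 120.5 "),
    ("2024-01-03", "98.0"),
    ("2024-01-03", "2.0"),
    ("2024-01-04", None),
]

def run_etl(conn):
    # Extract: land the raw records as-is
    conn.execute("CREATE TABLE raw_trades (day TEXT, amount TEXT)")
    conn.executemany("INSERT INTO raw_trades VALUES (?, ?)", RAW_ROWS)

    # Transform: drop nulls, strip whitespace, cast amounts to float
    cleaned = [
        (day, float(amount.strip()))
        for day, amount in conn.execute("SELECT day, amount FROM raw_trades")
        if amount is not None
    ]

    # Load: upsert into a per-day reporting table, summing duplicates
    conn.execute("CREATE TABLE daily_totals (day TEXT PRIMARY KEY, total REAL)")
    conn.executemany(
        "INSERT INTO daily_totals VALUES (?, ?) "
        "ON CONFLICT(day) DO UPDATE SET total = total + excluded.total",
        cleaned,
    )
    return dict(conn.execute("SELECT day, total FROM daily_totals"))

totals = run_etl(sqlite3.connect(":memory:"))
print(totals)  # {'2024-01-02': 120.5, '2024-01-03': 100.0}
```

A production pipeline would do the same three steps with Spark and a warehouse rather than sqlite3, but the extract/transform/load shape is identical.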
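Workflow orchestration of the kind Apache Airflow provides (tasks run only after their upstream dependencies) can be illustrated with a minimal pure-Python DAG runner built on the stdlib graphlib module. This is a toy sketch of the dependency idea, not the Airflow API:

```python
from graphlib import TopologicalSorter

# Hypothetical mini-DAG mirroring an extract -> transform -> load pipeline.
# Each key depends on the tasks in its value set, analogous to Airflow's
# upstream/downstream relationships (this is NOT the Airflow API).
results = []

def extract():   results.append("extract")
def transform(): results.append("transform")
def load():      results.append("load")

DEPS  = {"transform": {"extract"}, "load": {"transform"}}
FUNCS = {"extract": extract, "transform": transform, "load": load}

# static_order() yields tasks in an order that respects every dependency
for name in TopologicalSorter(DEPS).static_order():
    FUNCS[name]()

print(results)  # ['extract', 'transform', 'load']
```

Airflow adds scheduling, retries, and distributed workers on top, but the core contract is this same topological ordering of tasks.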
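The MapReduce model named in the Hadoop roles above can be sketched as a single-process Python toy: a map phase emitting key/value pairs, a shuffle grouping values by key, and a reduce phase aggregating each group. This is the classic word-count illustration, not a real Hadoop job:

```python
from collections import defaultdict

def map_phase(lines):
    # Mapper: emit (word, 1) for every word, as a MapReduce mapper would
    for line in lines:
        for word in line.split():
            yield word.lower(), 1

def shuffle(pairs):
    # Shuffle/sort: group all emitted values by key across mapper outputs
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reducer: sum the counts for each word
    return {word: sum(counts) for word, counts in groups.items()}

lines = ["Hadoop stores data", "Spark processes data"]
counts = reduce_phase(shuffle(map_phase(lines)))
print(counts["data"])  # 2
```

On a cluster, Hadoop runs the mappers and reducers on different machines and handles the shuffle over the network; the three-phase logic is unchanged.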

Frequently Asked Questions about Ajay B

What company does Ajay B work for?

Ajay B works for Credit Suisse

What is Ajay B's role at the current company?

Ajay B's current role is Senior Big Data Engineer.

Who are Ajay B's colleagues?

Ajay B's colleagues are Maria Oliveras-Martinez, Michelle O'Reilly, Tarun Sawlani, Alona Baranovska-Tymoshchuk, Meenakshi Subramanian, Jan Schelling, and Natalia B.

Not the Ajay B you were looking for?

  • Ajay B

    Clinical Data Manager at Worldwide Clinical Trials
    Grand Rapids, MI
  • Ajay B

    Worked as an Associate Analyst at Global Logic Technologies. In the process of Waymo
    United States
  • Ajay B

    Working as a Sr AWS DevOps Engineer | AWS Certified Solutions Architect Associate (SAA-C03)
    Irving, TX
  • Ajay B

    Senior Business Analyst
    United States
