Nani T Email and Phone Number
I have over 8 years of experience in designing and developing analytics and big data applications for Fortune companies. My expertise spans the full spectrum of data engineering, from designing and configuring Hadoop ecosystem components such as MapReduce, HDFS, HBase, Hive, Sqoop, Pig, Zookeeper, and Flume to managing Snowflake data warehousing environments.

I am well-versed in software development life cycles, having worked in both Waterfall and Agile methodologies. My knowledge extends to Tableau server commands, Python, and the Django, Flask, Zope, and Pyramid frameworks. I have a track record of successfully delivering data solutions on the Azure cloud platform, leveraging services such as Azure Data Factory, Azure Databricks, and Azure SQL Data Warehouse.

Proficiency in a variety of ETL tools, including Apache NiFi, Apache Spark, Talend, and Informatica, enables me to design and manage data pipelines efficiently. I specialize in integrating data from diverse sources while ensuring data consistency and accuracy.

In the AWS ecosystem, I have experience with services such as EC2, EMR, S3, KMS, Kinesis, and Lambda. I have designed and managed data infrastructure on AWS with an emphasis on scalability and high availability.

My expertise also extends to data modeling, ETL development, and data warehousing, ensuring the availability of high-quality, real-time data for business intelligence needs. I possess strong programming skills in Python and its ecosystem, including NumPy, scikit-learn, TensorFlow, Keras, BERT, Prophet, Seaborn, and Plotly.

Furthermore, I have hands-on experience with data integration, data cleansing, and analysis using technologies such as HiveQL, Pig Latin, and custom MapReduce programs in Java. I am committed to understanding and translating business requirements into feasible software deliverables.
I understand HDFS designs, daemons, federation, and high availability (HA). Feel free to connect with me on LinkedIn to discuss opportunities and explore how my skills and experience can contribute to your organization's growth and success.
Micron Technology
- Website: micron.com
- Employees: 20,793
Senior Data Engineer, Micron Technology
Oct 2021 - Present | Boise, Idaho, United States

I led the development of a complex data pipeline project, orchestrating a variety of data operations and transformations. The pipeline encompassed actions such as file movement, Sqoop data extraction from Teradata and SQL sources, and data export into Hive staging tables. I collaborated on ETL tasks, ensuring data integrity and pipeline stability, with a focus on data retrieval from file systems using Spark commands.

Python was central to the project, with code written for exploratory data analysis using machine learning libraries such as scikit-learn, NumPy, Pandas, and Matplotlib. We implemented Snowflake's COPY and INSERT statements, integrating Snowpipe for real-time data ingestion and transforming raw data into actionable insights.

Our team also leveraged cloud services for data ingestion, transformation, and pipeline construction. We employed Python scripts for data cleansing, mapping, aggregation, and quality reporting, facilitating effective data management.

Throughout the project, I utilized a diverse technology stack, including Python, PySpark, Docker, Kafka, and various data formats, to deliver a robust and efficient data pipeline solution.
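The Python cleansing and quality-reporting step described above can be sketched in miniature. This is an illustrative, stdlib-only stand-in, a hedged sketch rather than actual project code; the function, column names, and sample feed are hypothetical:

```python
import csv
import io
from collections import Counter

def quality_report(rows, required=("id", "amount")):
    """Split rows into accepted/rejected based on completeness of the
    required columns, and tally counts for a simple quality report."""
    report = Counter()
    clean = []
    for row in rows:
        if any(not (row.get(col) or "").strip() for col in required):
            report["rejected"] += 1  # missing or blank required field
        else:
            clean.append(row)
            report["accepted"] += 1
    return clean, report

# Hypothetical sample feed: two complete rows, two with missing fields.
raw = io.StringIO("id,amount\n1,10\n,5\n2,\n3,7\n")
clean, report = quality_report(list(csv.DictReader(raw)))
```

In a real pipeline the accepted rows would continue to a staging table while the counts feed a quality dashboard; the pattern stays the same regardless of the storage layer.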
Data Engineer, AgFirst Farm Credit Bank
Jul 2019 - Sep 2021 | Columbia, South Carolina, United States

Spearheaded a transformative project focused on optimizing data analysis and performance within a financial institution. Central to the initiative was the design and implementation of robust data processing pipelines built on Apache Spark for enhanced efficiency. Harnessing Spark capabilities such as SparkContext, SparkSession, DataFrames, and pair RDDs, the team developed scalable solutions for data transformation and analysis.

In addition to Spark, cloud services were used to bolster scalability and performance. Integrating Cloud HDInsight and other components ensured seamless adaptation to evolving data volumes and business requirements. This involved fine-tuning ETL workflows and optimizing resource allocation to maximize processing capability.

Expertise extended to a range of data storage and processing technologies, including Hadoop, NoSQL databases such as MongoDB, HBase, and Cassandra, and real-time ingestion frameworks such as Kafka and Flume. RESTful middleware was engineered to facilitate seamless communication between components, and distributed messaging systems were implemented for efficient data flow.
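The pair-RDD work mentioned above follows Spark's group-and-fold pattern. A minimal, framework-free sketch of that pattern (the data and key names are illustrative; real Spark code would run the same fold distributed across partitions):

```python
from collections import defaultdict
from functools import reduce

def reduce_by_key(pairs, fn):
    """Group (key, value) pairs by key and fold each group with fn,
    mimicking the behaviour of Spark's pairRDD.reduceByKey()."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return {key: reduce(fn, values) for key, values in groups.items()}

# Illustrative use: total transaction amount per account.
txns = [("acct-1", 100), ("acct-2", 40), ("acct-1", 60)]
totals = reduce_by_key(txns, lambda a, b: a + b)
```

Because the fold function is associative, Spark can apply it within each partition before shuffling, which is what makes `reduceByKey` cheaper than a naive `groupByKey` followed by a reduce.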
Data Engineer, Quotient Technology Inc.
Aug 2017 - Apr 2019 | India

I spearheaded a multifaceted project, leveraging the Hadoop ecosystem through Cloudera Manager to unlock actionable insights from big data. Our focus on cost efficiency led us to optimize data engineering on AWS using tools such as AWS Cost Explorer and Trusted Advisor. We collected and processed large volumes of log data using Apache Flume, stored it in HDFS, and developed a versatile Python API for enhanced data processing.

Within the Hadoop ecosystem, we employed a spectrum of technologies including MapReduce, Spark, Hive, Pig, Sqoop, HBase, Oozie, Impala, and Kafka. I played a pivotal role in converting Hive/SQL queries into Spark transformations using Spark RDDs, Python, and Scala. Data quality was our bedrock, with rigorous ETL validation processes ensuring accuracy and integrity.

We also explored machine learning, experimenting with ensemble methods and integrating Cassandra with Spark and Scala. User-friendly web interfaces were created using Django, while automation streamlined RabbitMQ cluster installations and IoT data ingestion into Kafka. We also authored Kafka producers for seamless data streaming from external REST APIs.
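The log-processing step, Flume-collected lines handed to a Python API, can be illustrated with a small stdlib-only sketch. The log layout and field names here are assumptions for illustration, not the actual schema:

```python
import re

# Assumed log layout: "<timestamp> <LEVEL> <message>".
LOG_PATTERN = re.compile(r"^(?P<ts>\S+)\s+(?P<level>[A-Z]+)\s+(?P<msg>.*)$")

def parse_logs(lines):
    """Yield structured events from raw log lines, silently skipping
    any line that does not match the expected layout."""
    for line in lines:
        match = LOG_PATTERN.match(line)
        if match:
            yield match.groupdict()

sample = [
    "2018-06-01T09:00:00 INFO ingestion started",
    "not a log line",
    "2018-06-01T09:00:05 ERROR checkpoint failed",
]
events = list(parse_logs(sample))
```

Using a generator keeps memory flat no matter how large the HDFS-backed log stream is, since events are produced one at a time rather than loaded wholesale.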
Data Analyst, Mayo Clinic
Aug 2015 - Jul 2017 | India

Implemented R and Shiny applications for business forecasting, alongside predictive models in Python and R to analyze customer behavior and classify customers. Leveraged SQL Server Integration Services (SSIS) for seamless data extraction, transformation, and loading from multiple sources. Used a range of R and Python packages, including ggplot2, caret, dplyr, pandas, NumPy, and Seaborn, for data manipulation and analysis. Conducted data cleansing using Excel functions and performed data retrieval from file systems to S3 using Spark commands. Utilized AWS services for data management and migration, including AWS Lambda and Amazon RDS, and developed healthcare dashboards in Tableau for real-time insights.

Documented complete process flows and developed reports supporting consultants in process improvements, performing analyses to surface insights and identify future areas of improvement. Prepared reports using MS Excel functions such as VLOOKUP, pivot tables, and macros. Leveraged AWS Redshift, S3, Redshift Spectrum, and Athena to query large datasets stored on S3 directly, creating a virtual data lake without the need for extensive ETL processes.
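The virtual-data-lake approach rests on external tables that point Athena or Redshift Spectrum at files already sitting in S3, so queries run in place with no load step. A small sketch of assembling such a DDL statement (the table name, columns, and bucket are hypothetical):

```python
def external_table_ddl(table, columns, s3_location, fmt="PARQUET"):
    """Build an Athena/Spectrum-style CREATE EXTERNAL TABLE statement,
    letting data in S3 be queried in place with no ETL load step."""
    cols = ",\n  ".join(f"{name} {dtype}" for name, dtype in columns)
    return (
        f"CREATE EXTERNAL TABLE {table} (\n  {cols}\n)\n"
        f"STORED AS {fmt}\n"
        f"LOCATION '{s3_location}'"
    )

# Hypothetical example: expose visit records stored as Parquet in S3.
ddl = external_table_ddl(
    "visits",
    [("visit_id", "BIGINT"), ("visit_date", "DATE")],
    "s3://example-bucket/visits/",
)
```

Once such a table exists in the Glue/Athena catalog, ordinary SQL against it scans the S3 files directly, which is what removes the need for a separate warehouse load.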
Nani T Education Details
Frequently Asked Questions about Nani T
What company does Nani T work for?
Nani T works for Micron Technology
What is Nani T's role at the current company?
Nani T's current role is "Actively looking out for opportunities | Data Engineer | Analytics | Bigdata | Azure | AWS | Python | Hadoop | Spark | SQL | DBT | Matillion | Snowflake | Databricks | Fabric | Power BI | ETL | Informatica | Kafka".
What schools did Nani T attend?
Nani T attended Jntuh College Of Engineering Hyderabad.
Who are Nani T's colleagues?
Nani T's colleagues are 廖世豪, Shawn Morrissey, Thein Than Khaing, Audrey Kaur, Jason, Qing Wei Tan, Max Kuo, 黃文誠.