Darshan K


Sr. Python and ML Engineer @ Jefferies
Irvine, CA, US
Darshan K's Location
Irvine, California, United States
About Darshan K

Python developer with 5 years of experience in finance and logistics. Well-versed in database engineering, data warehouses, and data processing. Skills include: Python, Django, Spark (PySpark), Kafka, AWS (S3, EC2, Lambda, Glue), MongoDB, SQL, CI/CD pipelines (Jenkins, Kubernetes), PyUnit, microservices, Angular.js, HTML, CSS, JavaScript, JSON, and jQuery.

Darshan K's Current Company Details
Jefferies

Jefferies

Sr. Python and ML Engineer
Irvine, CA, US
Website:
jefferies.com
Employees:
7751
Darshan K Work Experience Details
  • Jefferies
    Sr. Python and ML Engineer
    Jefferies
    Irvine, CA, US
  • Spotify
    Sr. Python Developer
    Spotify Jan 2021 - Present
    • Constructed AWS data pipelines using VPC, EC2, S3, Auto Scaling Groups (ASG), EBS, Snowflake, IAM, CloudFormation, Route 53, CloudWatch, CloudFront, and CloudTrail.
    • Worked on data pipelines with Kafka and Gobblin, and built ETL data pipelines on Hadoop/Teradata.
    • Created logical and physical data models using Erwin to meet the needs of the organization's information systems and business requirements.
    • Used several Python libraries (NumPy, Matplotlib, and MySQLdb for database connectivity) and IDEs (Spyder, PyCharm); created a MySQL database and wrote several queries and Django APIs to extract data from it.
    • Involved in the design of APIs for networking and cloud services and in the design and development of the application.
    • Developed an API to integrate with Confidential EC2 cloud-based architecture in AWS, including creating machine images.
    • Extensively involved in developing and consuming web services/APIs/microservices using the requests library in Python.
    • Developed Spark code using Python and Spark SQL for faster testing and data processing, and Spark Streaming for high-speed data processing.
    • Involved in real-time predictive analytics using Spark Streaming, Spark SQL, and Oracle Data Mining tools.
    • Designed and developed Flink pipelines to consume streaming data from Kafka, applying business logic to cleanse, transform, and serialize raw data.
    • Designed MySQL and Cassandra databases to improve the software development process; built automated tests using Robot Framework and wrote test cases using PyUnit and Selenium automation testing for better manipulation of test scripts.
    • Designed the application's tree/node architecture: interaction with the mortgage database and pool files, loan-attribute abstraction, data APIs, sorting and searching algorithms, tree/graph output, GUI demo, etc.
  • Pfizer
    Python Developer
    Pfizer Jan 2019 - Dec 2020
    • Maintained the end-to-end vision of the data flow diagram and developed logical data models into one or more physical data repositories.
    • Built an AWS CI/CD data pipeline and AWS data lake using EC2, AWS Glue, and AWS Lambda; also built an ETL data pipeline on Hadoop/Teradata.
    • Performed forward engineering of data models for schema generation and reverse-engineered existing data models to accommodate new requirements.
    • Designed the schema, configured, and deployed AWS Redshift for optimal storage and fast retrieval of data; used Spark SQL and MLlib libraries.
    • Consumed REST-based microservices via RESTful APIs; designed and tested jQuery, HTML, and CSS that meet web browser standards.
    • Reviewed transfers of data from different data sources into HDFS using Kafka producers, consumers, and brokers.
    • Designed and developed POCs in Spark using Python to compare the performance of Spark with Hive and SQL.
    • Collected data with Spark Streaming from an AWS S3 bucket in near real time, performed the necessary transformations and aggregations on the fly to build the common learner data model, and persisted the data in HDFS.
    • Developed and deployed enterprise applications using MapReduce, Spark (Streaming, Spark SQL), Storm, and Kafka.
    • Developed a common Flink module for serializing and deserializing Avro data by applying a schema; also worked on Python data pipelines for medical-image pre-processing, training, and testing.
    • Queried data in a Cassandra cluster using CQL (Cassandra Query Language) and measured the cluster's performance.
    • Used Python's pandas module to read CSV files and stored the data in NumPy data structures.
    • Utilized PyUnit, the Python unit test framework, for all Python applications and used Django database APIs to access database objects.
  • Discover Financial Services
    Software Engineer
    Discover Financial Services Jan 2017 - Dec 2018
    • Assisted senior developers in translating basic client requests into HTML code.
    • Built interactive and dynamic single-page applications using Angular through API handling.
    • Designed a web application using Angular.js through which client and server could communicate.
    • Contributed to the innovation and creation of company software and programs.
    • Worked collaboratively with the design team to understand end-user requirements, provide technical solutions, and implement new software features.
    • Collaborated with web application engineers and used Python scripts to load data into an AWS-hosted Cassandra database.
    • Worked with database developers, software developers, and the development lead to architect and implement new features and solutions for customers.
    • Developed and supported database architecture and development to create and enhance the enterprise applications.
    • Used AWS for deploying and scaling web applications and services developed with Python.
    • Interpreted the API requirements set by stakeholders to produce a product that met their business requirements.
    • Designed new interfaces for existing APIs and managed all enterprise APIs through a well-documented service catalog.
    • Assisted in developing technology roadmaps to evolve the API estate in conjunction with internal and external solution providers.
    • Developed software against given designs or specifications using the Django framework.
    • Set up full CI/CD pipelines so that each commit a developer makes goes through the standard software life-cycle process and is tested thoroughly before reaching production.
    • Created scripts for data modeling and data import/export; extensive experience deploying, managing, and developing MongoDB clusters.
    • Integrated with Kafka and worked on monitoring and troubleshooting Kafka.
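A CI/CD pipeline of the kind described in the Discover role (every commit tested before it can reach production) might be expressed as a declarative Jenkinsfile like this sketch; the stage names, shell commands, and deploy script are illustrative assumptions, not the actual pipeline.

```groovy
// Illustrative declarative Jenkinsfile; commands and deploy.sh are assumptions.
pipeline {
    agent any
    stages {
        stage('Install') {
            steps { sh 'pip install -r requirements.txt' }
        }
        stage('Test') {
            // Every commit runs the test suite before it can be promoted.
            steps { sh 'python -m pytest --junitxml=results.xml' }
        }
        stage('Deploy') {
            // Only commits on main that passed the tests are deployed.
            when { branch 'main' }
            steps { sh './deploy.sh' }
        }
    }
    post {
        always { junit 'results.xml' }
    }
}
```

Gating the deploy stage on the branch and on earlier stage success is what makes each commit pass through the full life cycle before production.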
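The tree/node architecture with loan-attribute abstraction and searching described in the Spotify role can be sketched as follows; the class name, node names, and the "rate" attribute are illustrative assumptions, not the actual project schema.

```python
# Minimal sketch of a tree/node structure for loan-attribute lookup.
# LoanNode and the "rate" attribute are hypothetical, for illustration only.

class LoanNode:
    """A node holding one pool or loan record, plus child nodes."""

    def __init__(self, name, attributes=None):
        self.name = name
        self.attributes = attributes or {}
        self.children = []

    def add_child(self, node):
        """Attach a child node and return it for chaining."""
        self.children.append(node)
        return node

    def find(self, predicate):
        """Depth-first search: yield every node matching the predicate."""
        if predicate(self):
            yield self
        for child in self.children:
            yield from child.find(predicate)


# Build a small pool -> loan tree and search it by attribute.
root = LoanNode("pool")
root.add_child(LoanNode("loan-1", {"rate": 3.5}))
branch = root.add_child(LoanNode("sub-pool"))
branch.add_child(LoanNode("loan-2", {"rate": 4.25}))

high_rate = [n.name for n in root.find(lambda n: n.attributes.get("rate", 0) > 4)]
print(high_rate)  # ['loan-2']
```

The depth-first generator keeps the search logic independent of the loan attributes, so any predicate (rate, origination date, pool id) can be applied without changing the tree code.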
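The PyUnit (unittest) test cases mentioned in the Pfizer role might look like this minimal sketch; `parse_amount` is a made-up helper used only to show the pattern, not code from the actual project.

```python
# Sketch of PyUnit (unittest) usage; parse_amount is a hypothetical helper.
import unittest


def parse_amount(text):
    """Parse a currency string like '$1,250.50' into a float."""
    return float(text.replace("$", "").replace(",", ""))


class ParseAmountTest(unittest.TestCase):
    def test_plain_number(self):
        self.assertEqual(parse_amount("42"), 42.0)

    def test_currency_string(self):
        self.assertEqual(parse_amount("$1,250.50"), 1250.5)


if __name__ == "__main__":
    unittest.main()
```

Running the module directly executes both test methods; in a CI pipeline the same suite is typically discovered with `python -m unittest`.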

Frequently Asked Questions about Darshan K

What company does Darshan K work for?

Darshan K works for Jefferies.

What is Darshan K's role at the current company?

Darshan K's current role is Sr. Python and ML Engineer.

Who are Darshan K's colleagues?

Darshan K's colleagues are Steven Tubb, Anne Jones, Kal Kaflng, Thomas Mclaughlin, Amour Remi, Logan Shanney, Mahmoud Omar.
