• Over 12 years of IT experience in analysis, design, development, coding, implementation, and testing of web-based multi-tier applications using Java/J2EE technologies.
• Experience leading data engineering initiatives and implementing solutions for RDBMS and NoSQL databases such as Redis, HBase, MapR-DB, and MarkLogic.
• Experience developing Spark jobs on top of MapR-DB and Hive tables using Spark SQL, including performance tuning and scheduling/maintenance of the jobs with Control-M, Airflow DAGs, and manual spark-submit on the cluster.
• Experience with AWS services such as AWS Lambda, S3, EMR, Glue, Athena, RDS, and IAM.
• Experience with MarkLogic data ingestion, transformation to XML/JSON, and data extraction using XQuery (cts:search and XPath).
• Experience in XML technologies including XML, XSLT, XPath, DTD, XML Schema, XMLBeans, JAXB, and SAX and DOM parsers.
• Experience with web services including RESTful services, SOAP UI, and WSDL, and knowledge of SOA.
• Experience with servlet containers and application servers such as Tomcat, WebLogic, WebSphere, and JBoss.
• Knowledge of ORM frameworks such as Hibernate.
• Extensive programming in Java/J2EE using Eclipse, RAD, and JBuilder.
• Experience developing with Kafka and good knowledge of IBM MQ.
• Experience developing microservices with Spring Boot.
• Experience with the HBase API and MapR APIs.
Lead Software Engineer, OCLC | Apr 2024 - Present | Dublin, OH, US
Senior Software Engineer, Nabors Industries | Jun 2022 - Mar 2024 | Houston, TX, US
• Led data engineering initiatives, implementing solutions for RDBMS and NoSQL databases.
• Led the team in designing, developing, and generating MapR-DB volumes and tables dynamically and historically, and loading the data into MapR-DB.
• Collaborated with different teams to gather requirements and analyze data needs in order to design efficient solutions.
• Used Apache Spark with Scala to design, develop, and generate data for various reports.
• Imported and read data from Parquet files on the MapR cluster, performed transformations using Spark, and generated data for various reports.
• Led the migration of a critical application from TIBCO StreamBase to Java.
• Used Spring Boot with core Java and OOP concepts to develop the data-retrieval code.
• Used the Spring framework along with REST APIs to develop code that generates rig performance metrics and saves them into MapR-DB tables.
• Used Airflow DAGs to schedule and trigger the REST API calls.
• Extensively used the Akka actor architecture for scalability and multi-threading; millions of activity messages per second were handled by propagating messages to the appropriate child actors asynchronously or synchronously (see the actor-routing sketch after this section).
• Integrated the Akka framework with Redis and Kafka within the Spring Boot environment, ensuring seamless communication and data flow for enhanced application performance.
• Used Apache Drill to query the tables in MapR-DB.
• Developed multiple POCs using Akka, Redis, and Kafka with Spring Boot.
• Used Redis as key-value storage for the different aggregations and the N and N-1 data.
• Cached application-specific data in in-memory data stores such as Redis and exposed it through RESTful endpoints backed by the Redis cache.
• Created decision tables using the Drools framework as part of the development.
• Involved in analysis, design, and testing phases and responsible for documenting technical specifications.
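A minimal sketch of the actor-routing pattern described above, written against the classic Akka Java API; the message type, actor names, and the child actor's persistence step are assumptions for illustration, not the production code.

import akka.actor.AbstractActor;
import akka.actor.ActorRef;
import akka.actor.ActorSystem;
import akka.actor.Props;
import java.util.HashMap;
import java.util.Map;

public class ActivityRouter extends AbstractActor {

    // Hypothetical message carrying a rig id and a metric payload.
    public static final class Activity {
        public final String rigId;
        public final String payload;
        public Activity(String rigId, String payload) { this.rigId = rigId; this.payload = payload; }
    }

    private final Map<String, ActorRef> children = new HashMap<>();

    @Override
    public Receive createReceive() {
        return receiveBuilder()
            .match(Activity.class, msg -> {
                // Create (or reuse) one child actor per rig and forward the message asynchronously.
                ActorRef child = children.computeIfAbsent(
                    msg.rigId, id -> getContext().actorOf(Props.create(RigWorker.class), "rig-" + id));
                child.tell(msg, getSelf());
            })
            .build();
    }

    // Hypothetical child actor; aggregation and Redis/MapR-DB writes would happen here.
    public static class RigWorker extends AbstractActor {
        @Override
        public Receive createReceive() {
            return receiveBuilder()
                .match(Activity.class, msg -> {
                    // aggregate metrics for this rig and persist them
                })
                .build();
        }
    }

    public static void main(String[] args) {
        ActorSystem system = ActorSystem.create("rig-metrics");
        ActorRef router = system.actorOf(Props.create(ActivityRouter.class), "activity-router");
        router.tell(new Activity("rig-42", "block-height=12.3"), ActorRef.noSender());
        system.terminate();
    }
}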
Big Data Developer, CAS | Apr 2020 - Jun 2022 | Columbus, OH, US
• Utilized AWS services such as Lambda, Athena, Glue, EMR, and S3 for data processing and storage.
• Configured and maintained the Hadoop environment on AWS EMR.
• Involved in converting Hive/SQL queries into Spark transformations using Spark RDDs.
• Implemented Spark applications to read sequence files from AWS S3, process the data, and load it back into S3 in Parquet format (see the sketch after this section).
• Imported data from AWS S3 into Spark RDDs and performed transformations and actions on the RDDs.
• Implemented Lambda functions in Python to create the EMR cluster and deploy and execute the Spark application.
• Utilized AWS EMR for Spark clusters, optimizing performance and scalability for big data processing workflows.
• Created triggers for the Lambda functions for the above process.
• Created AWS Glue jobs to transfer data from AWS Athena to RDS databases (MySQL, PostgreSQL, and Aurora).
• Installed Collibra Edge on AWS EC2 and integrated the Collibra DGC Catalog with AWS Athena, Glue, S3, RDS, and PostgreSQL.
• Managed and administered MarkLogic NoSQL database instances to ensure optimal performance, high availability, and data integrity.
• Configured and tuned MarkLogic clusters, including node configuration, failover settings, and database replication.
• Implemented and optimized complex XQuery queries to extract, transform, and load data in MarkLogic.
• Developed multiple POCs using Java, Scala, and Python, deployed them on the AWS EMR cluster, and performed benchmarking to understand Spark application performance.
• Developed Glue crawlers, created tables using Glue, and queried them using AWS Athena.
• Developed unit tests for Spark jobs using Scala.
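A minimal sketch of the sequence-file-to-Parquet Spark job described above, written against the Spark Java API; the bucket paths, key/value types, and schema are placeholders, and the job is assumed to be submitted with spark-submit on the EMR cluster (where the s3:// filesystem is provided by EMRFS).

import org.apache.hadoop.io.Text;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.RowFactory;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.StructField;
import org.apache.spark.sql.types.StructType;

public class S3SequenceToParquet {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder().appName("s3-seq-to-parquet").getOrCreate();
        JavaSparkContext jsc = new JavaSparkContext(spark.sparkContext());

        // Read <Text, Text> sequence files from S3 (placeholder bucket/prefix).
        JavaPairRDD<Text, Text> raw =
            jsc.sequenceFile("s3://example-input-bucket/data/", Text.class, Text.class);

        // Map each record to a Row immediately so reused Writable objects are not retained.
        StructType schema = DataTypes.createStructType(new StructField[] {
            DataTypes.createStructField("key", DataTypes.StringType, false),
            DataTypes.createStructField("value", DataTypes.StringType, true)
        });
        Dataset<Row> df = spark.createDataFrame(
            raw.map(t -> RowFactory.create(t._1.toString(), t._2.toString())), schema);

        // Write the processed data back to S3 in Parquet format.
        df.write().mode("overwrite").parquet("s3://example-output-bucket/parquet/");
        spark.stop();
    }
}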
Java/Big Data Developer, L Brands | May 2017 - Apr 2020 | Columbus, OH, US
• Designed and developed microservices using Spring Boot and Kafka for the delivery of software products across the enterprise.
• Used the Kafka tool and Confluent Kafka to create topics and import messages into the Kafka topics.
• Created MapR-DB tables for the development of the code.
• Used the HBase API for CRUD operations on MapR-DB tables (see the sketch after this section).
• Created Hive tables and developed and modified HQL queries to resolve issues reported by QA.
• Used the Spark framework to develop Spark jobs over MapR YARN to perform analytics on data in Hive.
• Scheduled and triggered the Spark jobs using Control-M.
• Involved in converting Hive/SQL queries into Spark transformations using Spark RDDs and Java.
• Worked on the Hadoop cluster and big data analytics tools including Hive, Spark, and Sqoop.
• Exported the analyzed data to Teradata using Sqoop for visualization and report generation.
• Involved in data conversion, data migration, data mapping, data validation, and data analysis.
• Developed multiple POCs using Java, deployed them on the YARN cluster, and compared the performance of Spark with Hive and SQL/Teradata.
• Developed an automation testing framework API for testing the Spark jobs and microservices automatically.
• Developed JUnit tests for both the microservices and the Spark jobs.
• Involved in analysis, design, and testing phases and responsible for documenting technical specifications.
• Developed Kafka producers and consumers, HBase clients, and Spark jobs, along with components on HDFS and Hive.
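A minimal sketch of CRUD through the HBase client API as described above; the table path, column family, and row key are illustrative only.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class MaprDbCrudExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // MapR-DB binary tables are addressed by filesystem path through MapR's
        // HBase-compatible client; with stock Apache HBase this would be a plain
        // table name such as "store_metrics".
        try (Connection connection = ConnectionFactory.createConnection(conf);
             Table table = connection.getTable(TableName.valueOf("/apps/store_metrics"))) {

            byte[] rowKey = Bytes.toBytes("store-001#2020-01-01");
            byte[] cf = Bytes.toBytes("m");

            // Create / update a cell.
            Put put = new Put(rowKey);
            put.addColumn(cf, Bytes.toBytes("sales"), Bytes.toBytes("12345.67"));
            table.put(put);

            // Read it back.
            Result result = table.get(new Get(rowKey));
            String sales = Bytes.toString(result.getValue(cf, Bytes.toBytes("sales")));
            System.out.println("sales=" + sales);

            // Delete the row.
            table.delete(new Delete(rowKey));
        }
    }
}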
Sr. Java Developer, Accenture | Jan 2013 - Apr 2017 | Dublin 2, IE
• Trained the offshore team to understand the business logic and brought them up to speed.
• Used the Spring JDBC framework for the development of batch jobs and interfaces; developed code using core Java and OOP concepts for retrieving data.
• Used Control-M for scheduling and executing the batch jobs and interfaces.
• Developed custom tags to add extra functionality to JSPs; used JavaScript for the development of the pages.
• Used Spring IoC to loosely couple the different layers of the application, such as the web, business, and DAO layers.
• Implemented the user interface in a Model-View-Controller architecture, achieving tight, clean coordination of Spring MVC, JSP, Servlets, and JSTL.
• Developed SQL queries with the JDBC API to create, retrieve, and update data.
• Designed and developed the data access layer using the Data Access Object (DAO) and Singleton design patterns; developed domain objects and DAO classes using Spring JDBC (see the DAO sketch after this section).
• Configured Hibernate to use second-level caching to display static lookup data from the database.
• Developed the TANF, SNAP, and Medicaid programs application to provide application registration, data collection, and eligibility determination for applicants.
• Involved in the eligibility development with MMIS (Medicaid Management Information System).
• Used IBM RTC as a workflow automation tool for bug tracking: creating change requests and maintenance requests, assigning defects, and tracking them.
• Used SOAP UI and WSDL to communicate over the internet.
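A minimal sketch of the Spring JDBC DAO pattern described above; the applicant table, its columns, and the Applicant domain object are hypothetical, not the actual case-management schema.

import java.util.List;
import javax.sql.DataSource;
import org.springframework.jdbc.core.JdbcTemplate;

public class ApplicantDao {

    private final JdbcTemplate jdbcTemplate;

    public ApplicantDao(DataSource dataSource) {
        this.jdbcTemplate = new JdbcTemplate(dataSource);
    }

    // Hypothetical domain object for an eligibility applicant.
    public static class Applicant {
        public final long id;
        public final String name;
        public Applicant(long id, String name) { this.id = id; this.name = name; }
    }

    public void insert(Applicant a) {
        jdbcTemplate.update("INSERT INTO applicant (id, name) VALUES (?, ?)", a.id, a.name);
    }

    public List<Applicant> findByName(String name) {
        // RowMapper lambda converts each result-set row into a domain object.
        return jdbcTemplate.query(
            "SELECT id, name FROM applicant WHERE name = ?",
            (rs, rowNum) -> new Applicant(rs.getLong("id"), rs.getString("name")),
            name);
    }
}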
Java Developer, Lead IT Corporation | Jan 2012 - Dec 2012 | Springfield, IL, US
• Involved in designing the UML diagrams based on requirements from the BA team using RSA.
• Involved in understanding the proof-of-concept (POC) code and developing code with reference to it.
• Worked extensively on the JRF, BOF, and MAX frameworks, and on the SEED framework based on an understanding of the SPARK framework.
• Worked on the presentation layer using JSP, SFX, CSS, Servlets, Spring, and the JavaBeans library; used Tiles for the development of the user interface.
• Created test cases and used JUnit for unit testing to track and close defects.
• Modified SQL, stored procedures, and functions for performance enhancement.
Gopi K - Education
University of South Alabama, Electrical and Electronics Engineering
Osmania University, Electronics and Communications Engineering