Eduardo Vladimir


Data Engineer
Eduardo Vladimir's Location
Brazil
About Eduardo Vladimir

15 years of experience as a software engineer, business intelligence consultant, data engineer and big data engineer. Skills: Python, Google Cloud, BigQuery, GCS, AWS (Amazon Web Services), S3, EC2, Athena, Glue, Cloudera, Hive, Impala, Spark, HDFS, Kafka, Airflow, Docker, cloud and distributed computing, data warehousing.

Eduardo Vladimir's Current Company Details

Data Engineer
Eduardo Vladimir Work Experience Details
  • Uber
    Data Engineer
    Uber Jul 2020 - Jul 2024
    • Responsible for the data engineering of Machine Learning related data within the Safety team.
    • Requirement gathering for the data needs of existing and new products within Safety.
    • Ingestion and modelling of safety-related data, providing all the information the team needs to generate insights about ongoing Safety and Machine Learning initiatives.
    • Responsible for data quality in the Safety Data team.
    • Used Uber's infrastructure to design and implement processes that let the data team quickly evaluate data quality and identify issues, ownership, root causes, and the need for alert and SLA tuning.
    • Coordinated Data On-Call meetings, guaranteeing stable data quality and properly functioning data pipelines for the Safety Data team.
    • Python, Airflow, Presto, Hadoop stack (HDFS, Hive, Spark), and Uber's internal tools and frameworks.
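The data-quality work described above (evaluating partitions, spotting issues, tuning alerts and SLAs) can be sketched as a small check function of the kind an Airflow task might call. This is a minimal illustration, not Uber's actual tooling; the thresholds and field names are assumptions.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass
class QualityReport:
    table: str
    passed: bool
    issues: list = field(default_factory=list)

def check_partition(table: str, row_count: int, null_rate: float,
                    last_loaded: datetime,
                    max_null_rate: float = 0.01,
                    freshness_sla: timedelta = timedelta(hours=24)) -> QualityReport:
    """Evaluate one table partition against simple quality thresholds.

    Thresholds are illustrative defaults; a real pipeline would tune
    them per table, which is the "alert and SLA tuning" mentioned above.
    """
    issues = []
    if row_count == 0:
        issues.append("empty partition")
    if null_rate > max_null_rate:
        issues.append(f"null rate {null_rate:.2%} exceeds {max_null_rate:.2%}")
    if datetime.now(timezone.utc) - last_loaded > freshness_sla:
        issues.append("partition is stale (freshness SLA breached)")
    return QualityReport(table=table, passed=not issues, issues=issues)
```

In a scheduler such as Airflow, a check like this would run per table per day, with failures feeding the on-call alerting described above.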
  • Amaro
    Data Engineer
    Amaro Dec 2019 - Jun 2020
    • Responsible for the company's data engineering, architecture and modelling.
    • Maintained the data infrastructure and developed new features, ingestion pipelines and data models.
    • Used CloudFormation to build a data-integration infrastructure, moving data from DynamoDB streams into S3, then into Snowflake via Snowpipe, and on to Looker.
    • Requirement gathering, data ingestion, dimensional data modelling, reporting.
    • RDS (SQL Server and MySQL), external sources, CloudFormation, DynamoDB, S3, API Gateway, Stitch, Snowflake, Snowpipe, Looker, Python, SQL.
  • Cabify
    Data Engineer
    Cabify Jul 2018 - Nov 2019
    • Maintained corporate data pipelines and developed new functionality.
    • Developed Kafka consumers to capture data from several topics.
    • Created and maintained the Docker images that run the Kafka consumers.
    • Developed ETL routines using Python, Spark (Databricks) and BigQuery.
    • Stored data in BigQuery, Databricks and AWS S3.
    • Crawlers and jobs using AWS Glue; queries using AWS Athena.
    • Routine scheduling using Apache Airflow.
    • Version control using Git.
    • Designed and planned the new corporate data architecture.
    • Developed a real-time (streaming) feature store for calculating a fraud-analysis score, using Google Cloud Pub/Sub, Google Cloud Dataflow, Apache Beam, Python and Bigtable.
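The Kafka-consumer work above boils down to mapping raw topic messages into flat rows for a warehouse. A sketch of that transform step, with a hypothetical event schema (the `id`/`ts`/`data` field names are assumptions, not Cabify's actual topics):

```python
import json
from datetime import datetime, timezone

def event_to_row(raw: bytes) -> dict:
    """Map one raw Kafka message (JSON bytes) to a flat, BigQuery-friendly row.

    In the real pipeline each topic had its own schema; this shows the shape
    of the transform, not the actual fields.
    """
    event = json.loads(raw)
    return {
        "event_id": event["id"],
        "event_type": event.get("type", "unknown"),
        # epoch seconds -> ISO 8601 UTC, a timestamp format BigQuery loads natively
        "occurred_at": datetime.fromtimestamp(event["ts"], tz=timezone.utc).isoformat(),
        # keep arbitrarily nested payloads as a JSON string column
        "payload": json.dumps(event.get("data", {})),
    }
```

Inside a consumer loop this would run per message (e.g. `rows = [event_to_row(m.value) for m in batch]`), with batches landing in BigQuery or S3.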
  • Semantix
    Big Data Engineer
    Semantix Dec 2016 - Jul 2018
    • Implemented the Service Level Indicators project at Banco Santander, measuring the effectiveness of the customer-support area.
    • Created Hive database and table structures; developed and automated ETL routines using Hive, Python, Spark and Shell Script; developed a KPI dashboard using Cognos Analytics.
    • Implemented the Meu Desconto project at Grupo Pão de Açúcar, in partnership with Dunnhumby, allocating offers to relevant customers.
      https://digitalks.com.br/noticias/case-gpa-como-melhorar-a-experiencia-do-consumidor-usando-dados-e-mobile/
    • Developed the Allocation Engine using Python, Spark and Hive, associating different offers, each with distinct objectives, to the most relevant customers via a relevance score generated from pre-calculated features.
    • Implemented data-ingestion routines at Elo Cartões, helping deliver the first PCI-compliant Cloudera environment in Latin America (a Cloudera success case).
      https://www.cloudera.com/more/customers/cartao-elo.html
    • Created a data-ingestion framework, routines and data marts for credit-card transaction authorization and liquidation using Hive, Shell Script, Sqoop and Oozie.
    • Implemented a data-integration project at Votorantim Energia, the first SnapLogic project in Brazil.
    • Implemented middleware for SOAP/REST communication between systems, databases and APIs (Salesforce, SAP, Serasa, Amazon S3) using SnapLogic.
    • Implemented the One Source of Data project at Cargill, integrating multiple data sources into a Hadoop environment.
    • Developed ETL routines from many data sources using Spark and Impala; architecture design, ingestion-framework development, requirement gathering, activity planning and backlog creation.
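The Allocation Engine described above (offers matched to customers by relevance score) can be illustrated as a greedy score-sorted assignment. This is a simplified stand-in for the Spark/Hive job, with made-up capacity limits; the real engine's constraints and scale are not known from the text.

```python
def allocate_offers(scores: dict, capacity: dict, max_per_customer: int = 1) -> dict:
    """Assign offers to customers in descending order of relevance score.

    scores:   maps (customer, offer) -> pre-calculated relevance score
    capacity: maps offer -> how many customers it may be allocated to
    A greedy pass over score-sorted pairs; a sketch, not the production logic.
    """
    remaining = dict(capacity)   # offers still available
    taken = {}                   # customer -> offers assigned so far
    allocation = {}              # customer -> list of allocated offers
    for (customer, offer), _score in sorted(scores.items(), key=lambda kv: -kv[1]):
        if remaining.get(offer, 0) > 0 and taken.get(customer, 0) < max_per_customer:
            allocation.setdefault(customer, []).append(offer)
            remaining[offer] -= 1
            taken[customer] = taken.get(customer, 0) + 1
    return allocation
```

In production this kind of pass would run as a distributed Spark job over Hive tables of pre-computed features rather than in-memory dicts.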
  • Arbit
    Business Intelligence Consultant
    Arbit Jul 2013 - Dec 2016
    • Consultant on a Sales Analytics project for Intel Corporation.
    • Daily Scrum meetings with teams in the US, Costa Rica and India.
    • Built tabular models using Analysis Services.
    • Developed BI solutions and integrated them with SharePoint.
    • Developed reports using Reporting Services and IBM Cognos.
    • Querying for analysis and reporting using DAX, MDX and SQL.
    • Customized IBM Cognos reports with JavaScript.
    • IBM Cognos content administration and security.
    • Created rules and processes using IBM Cognos TM1.
  • Leega
    Business Intelligence Consultant
    Leega Jun 2011 - Jun 2013
    • Analyzed and developed reports and dashboards using IBM Cognos and MicroStrategy.
    • Developed Extraction, Transformation and Load (ETL) projects using SQL Server Integration Services.
    • Developed proofs of concept (POCs).
    • Developed ETL routines, reports, analyses, dashboards and administration management using Dynamic Data Web.
    • Multidimensional data modelling, project-specification documents, ETL-routine modelling, and report and dashboard prototyping with Power Designer.
    • Applied documentation and methodology standards for Business Intelligence projects.
    • SQL Server, Oracle and DB2 for querying and data manipulation.
  • Grupo Aval Toledo Piza
    Development Analyst
    Grupo Aval Toledo Piza Dec 2009 - May 2011
    • Implemented, maintained and debugged enterprise system code using Visual Basic 6.0 and Crystal Reports.
    • Developed business reports using ad-hoc queries.
    • Data manipulation, import and export using SQL Server 2000, 2005 and 2008.
    • Level 2 user support.

Eduardo Vladimir Education Details

  • Faculdade De Informática E Administração Paulista - FIAP
    MBA in Big Data (Data Science)
  • Faculdade De Informática E Administração Paulista - FIAP
    Sistemas De Informação (Information Systems)
  • Etec Camargo Aranha
    Computing

Frequently Asked Questions about Eduardo Vladimir

What is Eduardo Vladimir's role at the current company?

Eduardo Vladimir's current role is Data Engineer.

What schools did Eduardo Vladimir attend?

Eduardo Vladimir attended Faculdade De Informática E Administração Paulista - FIAP and Etec Camargo Aranha.
