
n11 | Product Owner

September 2020 - Present

As a product owner, my responsibilities include:
* Analyzing the products, both technically and from the business side.
* Defining the roadmap of the products I work on, creating and prioritizing the backlog, and tracking tasks in Jira.
* Writing documentation; anything related to the products and projects can be found on Confluence and in Excel sheets.
* Discussing potential new features for the projects with other teams (UX, Marketing, Sales).
* Discussing the technical structure of the projects with developers.
* Talking with stakeholders about the projects.
* Presenting the projects.
* Answering any question related to the projects I work on.
* When necessary, creating presentations, drawing architecture diagrams, and writing the products' technical requirements.

Projects I work on:

* Marketing Automation Project: the main objective of this project is automating all marketing campaigns in one application.
* Clickstream Event Collection: we rewrote the real-time clickstream event collection from scratch. More than 50 events, from clicks to impressions, are implemented in the pipeline. I work with other teams to analyze which events can be collected based on their needs and what can be done with those events in specific scenarios.
* Real-time data warehouse: this is a huge project, but in short we capture changes from databases in real time and store them in BigQuery.
* Recommendation widgets analysis: the n11 apps contain multiple recommendation widgets, and I am responsible for analyzing them. (I am working with other n11 product owners on this project.)
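To give a feel for the change-data-capture flow in the real-time data warehouse project, here is a minimal, dependency-free sketch. It assumes a hypothetical Debezium-style change event; the field names and schema are illustrative, not n11's actual format:

```python
import json

def change_event_to_row(event_json: str) -> dict:
    """Flatten a hypothetical Debezium-style CDC event into a
    BigQuery-ready row: the new column values plus change metadata."""
    event = json.loads(event_json)
    row = dict(event["after"])            # column values after the change
    row["_op"] = event["op"]              # c=create, u=update, d=delete
    row["_ts_ms"] = event["ts_ms"]        # when the change was captured
    row["_source_table"] = event["source"]["table"]
    return row

# Example: an update captured from a (made-up) orders table
sample = json.dumps({
    "op": "u",
    "ts_ms": 1600000000000,
    "source": {"table": "orders"},
    "after": {"order_id": 42, "status": "shipped"},
})
print(change_event_to_row(sample))
```

In a real pipeline, rows like this would be batched and streamed into BigQuery rather than printed.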

HAY | Data/Software Engineer

September 2019 - September 2020

https://hellohay.co/

HAY is an Australian neobank; its headquarters are in Australia and the technology team is in the United Kingdom. Before this job I hadn't worked much with backend teams, since my background was on the data side. While building microservices for the bank with the platform team, I learned a lot. I mean a lot. After working on the backend side, connecting data and backend architectures became much easier. Until the beginning of 2020 I worked only on the platform team, then I also started working with the data team. Over time my data responsibilities grew, and now I work only on the data team.

If you are in Australia you can open your account in less than 5 minutes.

Language: Java

Technologies - Frameworks: Spring Boot, Kafka, Kubernetes, Postgresql, Redis, Docker

TRENDYOL | Data Engineer

January 2019 - September 2019

https://www.trendyol.com/

As a data engineer I had several responsibilities at Trendyol. One of them was maintaining and creating batch jobs written in Spark with Scala, as well as building real-time pipelines with Flink and other technologies. My other responsibility was working as a researcher on data science projects. As I mention below, I started my data journey as a data scientist; even though I like engineering more than science work, I still enjoy working on data science projects.
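The real-time pipelines mentioned above typically boil down to windowed aggregations over event streams. As a toy, dependency-free sketch of the idea (the real jobs used Spark and Flink, and these event names are made up), a tumbling-window count looks like this:

```python
from collections import defaultdict

def tumbling_window_counts(events, window_ms):
    """Count events per (window, event_type) pair.

    Each event is a (timestamp_ms, event_type) tuple; windows are
    tumbling, i.e. fixed-size and non-overlapping."""
    counts = defaultdict(int)
    for ts, kind in events:
        window_start = ts - ts % window_ms  # align to window boundary
        counts[(window_start, kind)] += 1
    return dict(counts)

events = [
    (1000, "click"), (1500, "impression"),
    (2500, "click"), (2600, "click"),
]
# 2-second tumbling windows: [0, 2000) and [2000, 4000)
print(tumbling_window_counts(events, 2000))
# {(0, 'click'): 1, (0, 'impression'): 1, (2000, 'click'): 2}
```

Flink and Spark Structured Streaming provide the same windowing as built-in operators, plus the hard parts this sketch ignores: out-of-order events, watermarks, and state management.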

Languages: Scala, Python

Technologies - Frameworks: AWS EC2, EMR, S3, EKS, Apache Spark, Flink, Kafka, Couchbase, Hive, TensorFlow, Keras, Docker

INSIDER | Data Engineer

August 2018 - January 2019

https://useinsider.com/


If someone asked me what the best thing about Insider is, I would say its feedback mechanism. The second best would be the team and its adoption of agile methodologies. Actually, I feel lucky, because every team I was part of at every company I worked for was really good; I can't pick one over another.

Languages: Scala

Technologies - Frameworks: AWS EC2, EMR, S3, Kinesis, Apache Spark, Redis, Hive, Elasticsearch, Akka

Milliyet Newspaper | Data Engineer

November 2017 - August 2018

I started my data journey at Milliyet Newspaper. As a data scientist intern, the first thing I learned was using Apache Spark. I had tried Apache Spark before this job, but working with real-world data was very different from the datasets you can find online.

After doing some analysis with the data I had, I learned how to use AWS services such as Kinesis and S3. At some point I wanted to learn about data engineering, and after I got deeper into that world, my mind changed: data science is really cool, but I liked the engineering part more. So I continued my data journey as a data engineer in my next job.

In my fourth month there I started working part-time.

Languages: Scala

Technologies - Frameworks: AWS EC2, EMR, S3, Kinesis, Apache Spark

Filika Tasarım | Software Engineer Intern

April 2017 - September 2017

https://www.filikatasarim.com/v2/

This was my first job, and one of the best first jobs I could have found. Filika is like a studio; they always encouraged me to try anything, never limited me, and taught me a lot.

I worked as a software engineer intern and helped build interactive products for events. I used C++ and openFrameworks almost all the time, but sometimes I also used Python for simple jobs. I worked with Raspberry Pi and Arduino.

The owners of Filika make music with code; I think that by itself shows Filika's creativity.

Raw Live Coding
