- Develop and maintain data pipelines to collect and store large amounts of data from various sources
- Design and build data storage and management systems such as data warehouses and data lakes
- Perform data cleansing, transformation, and integration tasks to ensure data quality and consistency
- Improve the efficiency and reliability of data processing through monitoring, error handling, and performance tuning of data pipelines
- Implement systems using appropriate programming languages and tools, and collaborate with other developers to integrate solutions
- At least 3 years of experience in data engineering or a related field
- In-depth understanding of technologies used for processing and storing large-scale data, such as Hadoop, Spark, Kafka, SQL, and NoSQL
- Experience in developing and maintaining data pipelines and ETL processes
- Email your resume to email@example.com
- Use the following email subject format:
  [Application] (REC-year-num) Your_Name
  e.g., [Application] (REC-2023-08) John_Wick
- Make sure that your application code (REC-year-num) is correct.
- Attach your resume in PDF format.
- Include any information you consider relevant in your resume, but be concise and to the point.
- Application: 2023/07/17 ~ 2023/07/28
- Screening: ~ 07/31
- Online interview: 07/31 ~ 08/04
- Onsite interview: 08/07 ~ 08/18
- Decision: ~ 08/25