Position: Big Data - Data Engineer/Developer - MongoDB

Mexico City - 1 year, 3 months ago

Company:

Skills:
- Distribute, store, and process data in a Hadoop cluster
- Write, configure, and deploy Apache Spark applications on a Hadoop cluster
- Use the Spark shell for interactive data analysis
- Process and query structured data using Spark SQL
- Use Spark Streaming to process a live data stream
- Use Flume and Kafka to ingest data for Spark Streaming
