51 Global Data jobs in Saudi Arabia
Big Data Specialist
Posted today
Job Description
Qualifications
- Candidate must have 7-8 years of experience in Big Data Analytics.
- Proficient with the Big Data engineering stack, preferably Cloudera, Hive, Spark, and Iceberg.
- Sound knowledge of Big Data frameworks such as MapReduce, Hive, and Spark.
- Knowledge of open-source workflow and data-flow frameworks such as Apache Airflow and Apache NiFi.
- Hands-on experience with SQL, Python, Java, and at least one database such as Oracle, Teradata, or MySQL is required.
- A sound understanding of Linux fundamentals and shell scripting is required.
- Familiarity with Kafka, MinIO, and Oracle GoldenGate is preferred.
- Telecom (telco) domain experience is a plus.
- Bachelor's degree in Computer Science, Data Science, or a related field.
- 7+ years of experience working with big data technologies (Hadoop, Spark, Kafka).
Big Data Engineer
Posted today
Job Description
Purpose
Design and run scalable big‑data pipelines to consolidate and govern master/reference data, integrating the MDM/RDM Hub with enterprise systems while ensuring accuracy, timeliness, security, and observability.
Key Responsibilities
- Design, develop, and implement integration pipelines to consolidate master/reference data from multiple sources into a central platform.
- Ensure smooth data flow across systems while preserving accuracy, consistency, and timeliness.
- Design models for master/reference data aligned with business needs; develop and maintain schemas/artifacts for large datasets.
- Monitor and apply data‑governance policies to sustain accuracy and continuous improvement.
- Operate and optimize the MDM hub for centralization and synchronization of master data (publishing, subscriptions, APIs).
- Implement privacy, security, and quality controls; enable auditability, change tracking, and duplication reduction.
- Manage the full data lifecycle (create, maintain, update, deprecate/purge) for master/reference domains.
- Control operations during changes with traceability and recovery; ensure continuity and rollback plans.
- Build observability and monitoring for health, performance, and quality of master/reference data systems.
- Use the big‑data stack (Spark, Airflow, Hive, Impala, Kudu, Iceberg) to implement and monitor pipelines and performance.
- Apply encryption and protection for sensitive data in accordance with the data‑classification policy; integrate metadata/lineage with CDGC.
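The consolidation and duplication-reduction duties above can be sketched in plain Python. This is a minimal illustration, not tied to any specific MDM product; the record fields and the "newest wins, back-fill gaps" survivorship rule are assumptions for the example:

```python
from typing import Any

def consolidate(records: list[dict[str, Any]], key: str = "customer_id") -> list[dict[str, Any]]:
    """Collapse duplicate source records into one golden record per key.

    Survivorship rule (assumed for illustration): newer records win,
    but missing/empty fields are back-filled from older duplicates.
    """
    golden: dict[Any, dict[str, Any]] = {}
    # Process oldest first so later (newer) records overwrite earlier values.
    for rec in sorted(records, key=lambda r: r["updated_at"]):
        merged = golden.setdefault(rec[key], {})
        for field, value in rec.items():
            if value not in (None, ""):
                merged[field] = value
    return list(golden.values())

# Two source systems report the same customer with partial, conflicting data.
sources = [
    {"customer_id": 1, "name": "Acme", "phone": "", "updated_at": "2024-01-01"},
    {"customer_id": 1, "name": "Acme Corp", "phone": None, "updated_at": "2024-06-01"},
    {"customer_id": 1, "name": "", "phone": "+966-11-000", "updated_at": "2024-03-01"},
]
golden = consolidate(sources)
```

A real MDM hub would apply configurable match rules and track lineage for each surviving field; the sketch only shows the merge step.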
Must‑Have Qualifications
- Bachelor's in CS/IS/Engineering (or equivalent).
- 4+ years building data platforms with strong PySpark/Scala, Airflow, SQL, and distributed data processing.
- Hands‑on experience with Hive/Impala, Parquet, table formats (e.g., Apache Iceberg), and performance tuning.
- Proficiency in CI/CD, Git, unit/integration testing, and production support on Linux.
- Experience integrating with MDM/RDM hubs and consuming/publishing via APIs or message streams.
- Experience connecting big‑data pipelines with IDMC (CDI/Mass Ingestion, CDQ, CDGC).
- Fluency in Arabic is required.
Big Data Administrator
Posted today
Job Description
We are looking to hire a Big Data Administrator to be the cornerstone of our data ecosystem. You'll work closely with data engineers and scientists to ensure our data flows seamlessly from source to insight.
Qualifications:
- Saudi Nationality Only.
- Degree in Computer Science or a related field.
- 1-3 years of experience as a Big Data Administrator, in Cloudera Data Platform administration, or in a similar role.
- Good understanding of the Cloudera data warehouse.
- Strong knowledge of RedHat, Ansible, and database fundamentals.
- A proactive, problem-solving mindset with the ability to work autonomously.
- Excellent communication skills and a true team player.
- Relevant certifications (e.g., Cloudera CCA, Red Hat System Administration I & II).
If interested, please send your updated CV to
Email subject:
Big Data Administrator
Big Data Consultant
Posted today
Job Description
Responsibilities:
- Provide Big Data strategy consulting to enterprises and government organizations.
- Build and manage BI & Data Analytics Centers of Excellence to drive organizational data maturity.
- Advise on data governance, security, and compliance frameworks.
- Design and implement scalable Big Data solutions across industries.
Requirements:
- Bachelor's degree in Computer Science, Software Engineering, or related field.
- Minimum 7 years of experience in Big Data architecture, strategic planning, and consulting.
- Proficiency in Big Data technologies, ML, Data Modeling, ETL, Python, AI Integration, Gen AI, Agentic AI, OSINT, GEOINT.
- Strong leadership and advisory skills for enterprise-scale projects.
Cloudera Big Data Engineer
Posted today
Job Description
Job Title: Cloudera Big Data Engineer
Employment Type: Full-time
Role Overview
We are seeking a highly skilled Cloudera Big Data Engineer with 5+ years of hands-on experience in building and managing large-scale data pipelines using Apache NiFi, Apache Kafka, and the Cloudera Data Platform (CDP Private Cloud). The ideal candidate will have deep expertise in batch and real-time streaming architectures, and will play a critical role in designing, implementing, and optimizing robust ingestion and processing frameworks to power enterprise analytics and AI/ML initiatives.
Key Responsibilities
Data Pipeline Design & Development
- Design, implement, and maintain end-to-end data pipelines for both batch and real-time data ingestion using Apache NiFi and Kafka.
- Ingest data from heterogeneous sources (databases, files, APIs, message queues, IoT devices) into the Cloudera ecosystem.
- Build and manage Kafka topics, producers, and consumers to enable low-latency streaming data flows.
- Implement complex data routing, transformations, validation, and enrichment logic within NiFi flows and Kafka streams.
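Routing, validation, and enrichment logic of the kind these bullets describe inside NiFi flows can be illustrated in standalone Python. This is a sketch only; the event fields, topic names, and rules are invented for the example and are not part of any NiFi or Kafka API:

```python
def validate(event: dict) -> bool:
    """Minimal schema check: required fields present and non-empty."""
    return all(event.get(f) for f in ("id", "source", "payload"))

def route_event(event: dict) -> str:
    """Decide which downstream topic an event belongs to (NiFi-style routing)."""
    if not validate(event):
        return "quarantine"  # invalid records go to a dead-letter path
    return "stream.iot" if event["source"] == "iot" else "stream.batch"

def enrich(event: dict) -> dict:
    """Attach processing metadata before publishing downstream."""
    return {**event, "pipeline": "ingest-v1"}

events = [
    {"id": "e1", "source": "iot", "payload": {"temp": 21}},
    {"id": "e2", "source": "crm", "payload": {"name": "x"}},
    {"id": "e3", "source": "iot"},  # missing payload -> invalid
]
routed = {e["id"]: route_event(e) for e in events}
processed = [enrich(e) for e in events if route_event(e) != "quarantine"]
```

In an actual NiFi flow, the same decisions would be expressed as RouteOnAttribute/ValidateRecord processors and attribute updates rather than Python functions.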
Platform Engineering & Integration
- Configure and manage data ingestion components within Cloudera Data Platform (CDP), including NiFi, Kafka, HDFS, and Hive.
- Integrate pipelines with downstream layers such as Hive, Impala, Kudu, or analytical databases to enable analytics and AI workloads.
- Develop metadata-driven, reusable ingestion frameworks to accelerate new data source onboarding.
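A metadata-driven ingestion framework, as described above, reads per-source configuration instead of hard-coding each feed. A minimal sketch follows; the source names, config fields, and reader registry are all invented for illustration:

```python
# Each entry describes one data source; onboarding a new feed means adding
# a config record, not writing new pipeline code.
SOURCE_CONFIGS = [
    {"name": "crm_customers", "format": "jdbc", "schedule": "daily", "target": "raw.crm"},
    {"name": "sensor_feed", "format": "kafka", "schedule": "streaming", "target": "raw.iot"},
]

# Registry mapping declared formats to reader implementations (stubs here).
READERS = {
    "jdbc": lambda cfg: f"read table for {cfg['name']}",
    "kafka": lambda cfg: f"consume topic for {cfg['name']}",
}

def build_jobs(configs: list[dict]) -> list[dict]:
    """Turn declarative source metadata into concrete ingestion jobs."""
    jobs = []
    for cfg in configs:
        reader = READERS[cfg["format"]]  # dispatch on the declared format
        jobs.append({"source": cfg["name"], "action": reader(cfg), "target": cfg["target"]})
    return jobs

jobs = build_jobs(SOURCE_CONFIGS)
```

The same pattern scales to NiFi parameter contexts or Airflow DAG factories: the framework code stays fixed while the config table grows with each onboarded source.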
Optimization & Monitoring
- Optimize NiFi flows and Kafka configurations for high throughput, scalability, and fault tolerance.
- Implement comprehensive monitoring and alerting using Cloudera Manager, NiFi provenance, and Kafka metrics.
- Troubleshoot complex ingestion and streaming issues across environments (DEV, UAT, PROD).
Governance, Security & Compliance
- Ensure all pipelines adhere to data governance, lineage, and security standards in line with PDPL, NCA, and organizational frameworks.
- Apply best practices in access control, encryption, data quality checks, and auditing.
Collaboration & Enablement
- Work closely with data architects, data modelers, data scientists, and business analysts to deliver clean, reliable, and timely data.
- Document flows, create reusable templates, and conduct knowledge transfer sessions to upskill internal teams.
Required Qualifications
- Bachelor's degree in Computer Science, Information Systems, or a related field (Master's is a plus).
- 5+ years of proven experience as a Data Engineer working on big data ecosystems, preferably on Cloudera CDP (Private Cloud).
- Expert-level skills in:
  - Apache NiFi: flow design, scheduling, controller services, provenance tracking, error handling, templates.
  - Apache Kafka: topic creation, schema management, producer/consumer implementation, stream processing.
  - Building and maintaining batch and streaming data pipelines in production-grade environments.
- Solid experience with HDFS, Hive/Impala, Linux, and shell scripting.
- Good understanding of data security, lineage, and governance frameworks.
- Strong scripting skills (e.g., Python, Bash, Groovy) for automation and orchestration.
- Proficiency with Cloudera Manager or similar cluster management tools.
Preferred Skills
- Cloudera certifications (e.g., CDP Data Engineer, CDP Administrator) are highly desirable.
- Experience in real-time analytics and IoT ingestion scenarios.
- Familiarity with CI/CD for data pipelines, Git, and DevOps practices.
- Domain experience in telecom, financial services, or government is a plus.
Soft Skills
- Strong analytical and problem-solving mindset.
- Excellent communication skills (English; Arabic is a plus).
- Ability to work independently and in cross-functional teams.
- Proactive, structured, and delivery-focused in complex environments.
Big Data Integration Expert
Posted today
Job Description
- Integrate Big Data solutions with enterprise IT systems and cloud platforms.
- Conduct open-source integration projects for data lakes, warehouses, and pipelines.
- Design and develop end-to-end data warehouse solutions and custom Big Data applications.
- Implement robust data pipelines and ETL workflows for real-time and batch processing.
Requirements:
- Bachelor's degree in IT, Computer Science, Software Engineering, or related field.
- Minimum 7 years of experience in Big Data integration and implementation.
- Technical expertise in Big Data frameworks, ML, Data Modeling, ETL tools, Python, AI Integration, Gen AI, Agentic AI, OSINT, GEOINT.
- Strong experience with system integration, APIs, and distributed computing environments.
Big Data Staffing Consultant
Posted today
Job Description
- Recruit, screen, and place Big Data engineers, data analysts, and BI specialists.
- Design and deliver training programs for Big Data technologies and best practices.
- Provide Big Data manpower solutions tailored to project and organizational needs.
Requirements
- Bachelor's degree in Business, IT, Computer Science, Software Engineering, or related field.
- Minimum 5 years of experience in Big Data staffing, resource management, and training.
- Familiarity with Big Data ecosystems, ML, Data Modeling, ETL, Python, AI Integration, Gen AI, Agentic AI, OSINT, GEOINT.
- Excellent talent acquisition and training program management skills.
Big Data Support Expert
Posted today
Job Description
- Provide ongoing maintenance and support for Big Data platforms and BI systems.
- Ensure scalability, reliability, and high availability of Big Data infrastructures.
- Troubleshoot Big Data issues including ETL pipelines, data modeling, and integration problems.
- Offer annual Big Data & BI support contracts for proactive system monitoring and upgrades.
Requirements:
- Bachelor's degree in Computer Science, Software Engineering, or related field.
- Minimum 5 years of experience in Big Data support and maintenance services.
- Hands-on knowledge in Big Data tools, ML, Data Modeling, ETL pipelines, Python, AI Integration, Gen AI, Agentic AI, OSINT, GEOINT.
- Strong troubleshooting and performance optimization expertise.
Big Data Service Manager
Posted today
Job Description
- Offer managed services contracts for Big Data solutions, ensuring availability and scalability.
- Implement Big Data infrastructure and management systems for large-scale operations.
- Provide data migration, transformation, and integration services across platforms.
- Optimize and manage Big Data storage solutions to reduce cost and improve performance.
Requirements:
- Bachelor's degree in Computer Science, Software Engineering, or related field.
- Minimum 5 years of experience in Big Data service management, infrastructure, and automation.
- Expertise in Big Data platforms, ML, ETL processes, Python, AI Integration, Gen AI, Agentic AI, OSINT, GEOINT.
- Strong experience with data monitoring, service-level agreements (SLAs), and automation tools.
Big Data R&D Consultant
Posted today
Job Description
- Conduct research and development of innovative Big Data analytics techniques.
- Explore advanced approaches for handling and analyzing large, complex datasets.
- Investigate advancements in distributed computing frameworks (e.g., Hadoop, Spark, Flink).
- Address nation-scale challenges by applying advanced Big Data methodologies.
- Continuously develop and deliver Big Data Proof of Concepts (PoCs) to demonstrate feasibility and value.
Requirements:
- Bachelor's degree in Computer Science, Software Engineering, or related field.
- Minimum 10 years of experience in Big Data R&D.
- Strong expertise in Big Data platforms, ML, Data Modeling, ETL, Python, AI Integration, Gen AI, Agentic AI, OSINT, GEOINT.
- Proven track record in data science research, distributed systems, and prototyping.