
Senior Data DevOps Engineer
- Wimbledon, South West London
- Permanent
- Full-time
Responsibilities:
- Own and lead the CI/CD pipeline, ensuring a stable, performant platform and process.
- Serve as a technical mentor for DevOps engineers.
- Drive continuous improvement in infrastructure, automation, and deployment strategies.
- Proactively identify and resolve bottlenecks, scaling challenges, and systemic issues.
- Communicate effectively with the Operations, Development, and Data teams.
- Maintain and improve processes and documentation.
- Provide end-to-end solutions, from design to execution.
- Champion a DevOps culture and advocate best practices across teams.
- Provide support and guidance to engineering teams on tooling and infrastructure.
- Set up and evolve tools and infrastructure for scalability.
- Automate processes wherever feasible.
- Lead incident management and root cause analysis.
- Support and act in line with company values, compliance, and GRC obligations.
- Ensure that you, and your team, adhere to the Governance, Risk & Compliance (GRC) obligations within your direct responsibility and control.
- Ensure any non-compliance incidents within your team are raised through the appropriate channels (Compliance Incidents Process) and that your team is informed of any reporting processes relevant to them.
- Challenge processes, policies and projects that will negatively impact compliance within the Group.
- Ensure your team completes all mandatory compliance training within the set deadlines.
- Reach out to the Compliance Teams if you are unsure of any of your compliance obligations or if the requirements are unclear.
Skills & Experience:
- Extensive experience with AWS services: EKS, EC2, S3, RDS, Redshift, DynamoDB, Lambda, EMR, Karpenter, Route53, IAM, etc.
- Strong experience designing and optimizing CI/CD pipelines (Jenkins, ArgoCD).
- Proficient in scripting and automation using Ansible, Python, Bash.
- Deep hands-on experience with Docker and container orchestration (Kubernetes, Docker Swarm).
- Familiar with Vault and Artifactory for secrets management and artifact delivery.
- Proficient with Git and Bitbucket for source control.
- Strong experience with infrastructure-as-code tools, especially Terraform.
- Exposure to monitoring and observability tools like Prometheus, Grafana, Kibana.
- Familiarity with technologies such as Couchbase, Elasticsearch, Oracle, MSSQL, PostgreSQL, and Kafka.
- Experience with data environments: Airflow, EMR, SageMaker, Ray, TensorFlow, MLflow, Kubeflow, Dask, Flink, Flask, KServe.
- Familiarity with data lakes and warehouse solutions: Snowflake, BigQuery, Redshift.
- Exposure to PostgreSQL, DynamoDB, Kafka, Kafka Streams, Kafka Connect.
- Experience with OpenStack, Azure, GCP alongside AWS.
- Experience maintaining pipelines and tools like Jenkins, GitLab, ArgoCD, SonarQube, Artifactory, Vault.
- Knowledge of monitoring/APM tools: Kibana, CloudWatch, Loki, OpenTelemetry, Splunk, Elastic.
- Experience maintaining configurations in distributed, multi-tenant environments.
- Occasional out-of-hours conferencing or maintenance work.
What We Offer:
- Supportive, inclusive workplace aligned with Kindred's Diversity & Inclusion values.
- Learning & Development opportunities tailored to your career growth.
- Agile environment with cutting-edge technology.
- Collaborative culture with highly motivated, knowledge-sharing colleagues.
Personal Attributes:
- Ability to prioritize and remain focused under competing demands.
- Excellent interpersonal, communication, and stakeholder engagement skills.
- Strong time management and organizational abilities.
- Collaborative mindset with a people-first focus.
- Proven track record of gaining cross-team cooperation and commitment.