Specialist Solutions Architect

This role can be remote, and we are open to candidates located anywhere in the Continental United States and Canada.

As a Specialist Solutions Architect (SSA), you will guide customers in building big data solutions on Databricks that span a wide variety of use cases. This is a customer-facing role in which you will work with and support Solution Architects, and it requires hands-on experience with Apache Spark™ and expertise in other data technologies. SSAs help customers design and implement essential workloads while aligning their technical roadmap for expanding usage of the Databricks Lakehouse Platform. As a go-to deep expert reporting to the Specialist Field Engineering Manager, you will continue to strengthen your technical skills through mentorship, learning, and internal training programs, and establish yourself in an area of specialty – whether that be performance tuning, machine learning, industry expertise, or more.

The impact you will have:

  • Provide technical leadership to guide customers to successful implementations on big data projects, ranging from architectural design to data engineering to model deployment
  • Architect production level workloads, including end-to-end pipeline load performance testing and optimization
  • Become a technical expert in an area such as data management, cloud platforms, data science, machine learning, or architecture
  • Assist Solution Architects with more advanced aspects of the technical sale including custom proof of concept content, estimating workload sizing, and custom architectures
  • Provide tutorials and training to improve community adoption (including hackathons and conference presentations)
  • Contribute to the Databricks Community

What we look for:

  • 5+ years of experience with expertise in at least one of the following big data roles:
    • Software Engineer/Data Engineer: query tuning, performance tuning, troubleshooting, and debugging Spark or other big data solutions
    • Data Scientist/ML Engineer: model selection, model lifecycle, hyperparameter tuning, model serving, deep learning, etc.
    • Data Applications Engineer: building out use cases that extensively utilize data, such as risk modeling, fraud detection, customer lifetime value, etc.
  • Design and implementation experience in big data technologies such as Hadoop, NoSQL, MPP, OLTP, and OLAP, or in full-lifecycle data science solutions
  • Experience maintaining and extending production data systems to evolve with complex business needs
  • Production programming experience in Python, R, Scala, or Java
  • Deep specialty expertise in at least one of the following areas:
    • Experience scaling big data workloads that are performant and cost-effective
    • Experience with development tools for CI/CD (e.g., Jenkins), unit and integration testing, automation and orchestration, REST APIs, BI tools, and SQL interfaces
    • Experience designing data solutions on cloud infrastructure and services such as AWS, Azure, or GCP, utilizing best practices in cloud security and networking
    • Experience with ML concepts covering model tracking, model serving, and other aspects of productionizing ML pipelines in distributed data processing environments like Apache Spark, using tools like MLflow
    • Experience implementing industry-specific data analytics use cases
  • [Desired] Degree in a quantitative discipline (Computer Science, Applied Mathematics, Operations Research)
  • [Nice to have] Databricks Certification
  • Available to travel up to 30% when needed

Benefits:

  • Comprehensive health coverage including medical, dental, and vision
  • 401(k) Plan
  • Equity awards
  • Flexible time off
  • Paid parental leave
  • Family planning support
  • Gym reimbursement
  • Annual personal development fund
  • Work headphones reimbursement
  • Employee Assistance Program (EAP)
  • Business travel accident insurance
  • Mental wellness resources

Apply Here