Data Platform Architect
We are seeking a Data Platform Architect who will apply best practices in the use of the platform's capabilities to integrate systems, master data sets, and partner with the Analysis teams to provide healthcare data and analysis output. The role will support the evolution of the data platform as new data domains are introduced, and will leverage big data services to solve complex data challenges faced by Abacus’ health plan customers. The Data Platform Architect is a customer-facing role, accountable for the end-to-end customer deployment and for ownership of the entire technical customer engagement, including architectural design sessions, implementation of projects and/or proofs of concept, and production code development and reviews. The ideal candidate will have experience in customer-facing roles, a track record of leading deep technical architecture and design discussions with senior executives, and the ability to be hands-on to demonstrate the depth of their experience and expertise.
Job Responsibilities & Duties
The role requires a strong work ethic. The Data Platform Architect will:
- Lead and direct clients towards optimal data architectures on AWS, Azure, GCP and hybrid environments
- Support pre- and post-sales processes with respect to data model/information domain, connector, and mastering/record-linking designs, estimates, assumptions, and scope
- Create and maintain data integration and connector pipelines. Proactively manage and monitor data integration/orchestration flows, and scheduled tasks. Create and maintain templates and best practice integration and connector examples that will be re-used by users of the data platform. Create and maintain APIs for sharing datasets.
- Support the technical development and evolution of Abacus domain and master data models for specific data domains. Understand the data quality of mastered data and provide technical guidance for Data Managers and Data Stewards. Evolve data quality and data master rules to support achieving business benefit.
- Provide specialist technical support to Data Scientists and the Data Analyst communities in the creation and use of data sets. Provide technical support in cataloguing of data sources. Ensure analytics model insights / visualizations are available for appropriate staff and Partners.
- Design, coordinate and execute data-related pilots, prototypes or proofs of concept, and provide validation for specific scenarios
- Evolve the physical implementation of Abacus NoSQL and SQL databases
- Implement and scale the Abacus NoSQL databases for optimal insert/append processing and read query access
- Collaborate closely with application team architects and engineers to identify technologies and platforms suitable for their big data processing requirements, and then assist those teams with onboarding, development, deployment, and debugging on those platforms
- Investigate new big data tools and technologies for their potential application to common use cases; establish best practices, develop design patterns, and write documentation to disseminate new capabilities to a broad technical audience; work with platform engineers and product managers to specify and deliver major new technology features
- Provide technical assistance to a broad community of big data infrastructure users, such as software application engineers and data scientists, through research, investigation, collaboration, and hands-on debugging, often driven by specific use case requirements
- Ensure that application big data solutions adhere to best practices and enterprise standards for scalability, availability, efficiency, data life-cycle management, information security, fault tolerance, and disaster recovery
Job Qualifications & Preferred Skills/Experience
- 5+ years of experience and a deep understanding of databases and analytics, including relational databases (e.g., SQL Server, MySQL, Oracle), NoSQL databases (e.g., Cassandra, HBase), data warehousing, big data technologies (Hadoop, Hive, Spark), and business analytics
- Demonstrated data-driven problem-solving approaches
- Demonstrated experience training and operationalizing machine learning or deep learning models
- Experienced with Data Integration, Data Migration, Master Data Management or Data Science software
- Deep NoSQL data modeling experience
- Deep experience in designing commercial-level integration connectors
- Demonstrated track record in distributing and scaling large NoSQL databases
- Strong experience in master data management, record linking and classification
- Developed, designed or managed Extract, Transform and Load (ETL) routines
- Experienced with data wrangling and data sanitization
- Defined or worked with a variety of data formats, e.g., CSV, XML, JSON, structured and unstructured
- Experienced with a wide range of protocols and data transfer methods, e.g., batch, real-time, and near real-time via REST, SOAP, and web services
- Presentation skills with a high degree of comfort with both large and small audiences
Highly Desirable Requirements:
- Worked in an Agile manner: hands-on experience following Agile and test-driven development practices
- Implemented modern engineering, DevOps and CI/CD programs, best practices and disciplines for clients
- Previous work designing/managing data structures within a data warehouse
- Operational experience of metadata and reference data management
- Developed and implemented application modernization, refactoring, and migration approaches to transition application workloads from on-premises to cloud environments
- Hands-on experience leading large-scale global data warehousing and analytics projects
- Demonstrated industry leadership in the fields of databases, data warehousing or data science
- Track record of implementing AWS services in a variety of distributed computing, enterprise environments
- Minimum of a Bachelor’s degree (Computer Science, Electrical/Mechanical Engineering, Information Systems, or a related major)
Compensation
Negotiable based on experience and qualifications.