

Data Architect

Role: Full-Time

Location: Minneapolis, MN

Salary: $150k

Essential Duties and Responsibilities:

  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies
  • Design and build production data pipelines from ingestion to consumption within a big data architecture, using AWS native or custom programming
  • Perform detailed assessments of current-state data platforms and create an appropriate transition path to the AWS cloud
  • Create and maintain optimal data pipeline architecture, and identify ways to improve data reliability, efficiency and quality
  • Use programming languages to construct and maintain the proposed architecture and enable data to be searched and retrieved efficiently
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Troubleshoot the system and solve problems across all platform and application domains, and apply the soft skills needed to explain complex data to others in the organization
  • Create and document processes, outlining how other data professionals in the team will harvest, authenticate, and model the information
  • Stay current with new technology options and vendor products, evaluating which ones would be a good fit for the company
  • Perform other duties and responsibilities as assigned


Required Qualifications:

  • Associate’s degree, or high school diploma/GED with 5 years of Information Systems experience
  • 5+ years of experience working with Data Warehouse, Data Lake, and Data Mart concepts
  • 3+ years of hands-on experience with Python and SQL
  • 2+ years of experience with AWS Big Data and Analytics services
  • Familiarity with the AWS ecosystem and services such as S3, Redshift/Spectrum, Lambda, Glue, EMR, Athena, DMS, and CloudFormation


Preferred Qualifications:

  • Bachelor’s degree
  • Professional Certifications, preferably in AWS
  • Strong knowledge of design, creation, interpretation, and management of large datasets to achieve business goals
  • 3+ years of experience designing and developing data processing pipelines using AWS Cloud and big data technologies
  • Proficient in one or more programming languages, preferably Python
  • Proficient in AWS services such as EC2, S3, Redshift/Spectrum, Glue, Athena, RDS, Lambda, and API Gateway
  • Proficient in writing SQL using any RDBMS (Redshift, SQL Server, Oracle, etc.)
  • Experience with disparate file formats such as Parquet, Avro, ORC, CSV, and JSON
  • Experience with source control systems, DevOps, and CI/CD implementations
  • Experience with building solutions to integrate with data visualization tools like QuickSight
  • Working knowledge of software best practices across the development life cycle, including agile methodologies in a lean-agile environment

We’re an equal opportunity employer. All applicants will be considered for employment without attention to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran or disability status.
