Published on May 17, 2017
Hoodline Is Hiring Data Engineers

Come work for Hoodline.

We’re looking for data engineers who can help us dig into the public and private data sets we have access to. We’re figuring out how cities really work today, then making predictions about how they’ll look in the future.

About Hoodline

We partner with 200+ publishers and do our own reporting to discover insights for every location. Our local content platform provides relevant nearby articles, photos, and videos for any app or site, helping users make better decisions about everything from where to eat lunch to where to live.

We began as a neighborhood news site and have grown into a leading source of local news in the San Francisco Bay Area. It's also where we showcase what we’re doing with data: check out our recent analysis of one neighborhood’s retail vacancy problem, or our roundup of the 50 best donut shops across the country.

Working with us

If you have the right skills and share our mission, you’ll help solve local discovery through data, addressing major city-level content and insights problems around the world, while having fun and building a great business, too.

We offer:

  • Strong compensation
  • Early equity
  • Product ownership and a ton of responsibility

Essential responsibilities:

  • Design, build, and maintain efficient, reliable data pipelines
  • Create and maintain frameworks that support data integrity across those pipelines
  • Troubleshoot performance, system, and data-related issues, and work to ensure data consistency and integrity
  • Aggregate disparate data sources for efficient retrieval by a broad range of applications
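To give a flavor of the work, here is a minimal sketch of an extract-validate-aggregate pipeline stage of the kind described above. The field names, validation rules, and data are hypothetical illustrations, not Hoodline's actual stack or schema:

```python
from collections import defaultdict

# Hypothetical schema: each record reports business counts per neighborhood.
REQUIRED_FIELDS = {"neighborhood", "category", "count"}

def validate(record):
    """Basic integrity check: required fields present, count non-negative."""
    return REQUIRED_FIELDS <= record.keys() and record["count"] >= 0

def aggregate(records):
    """Roll up valid records into per-neighborhood category totals,
    silently dropping records that fail validation."""
    totals = defaultdict(lambda: defaultdict(int))
    for rec in records:
        if validate(rec):
            totals[rec["neighborhood"]][rec["category"]] += rec["count"]
    return {hood: dict(cats) for hood, cats in totals.items()}

raw = [
    {"neighborhood": "Mission", "category": "restaurant", "count": 3},
    {"neighborhood": "Mission", "category": "restaurant", "count": 2},
    {"neighborhood": "Castro", "category": "retail", "count": -1},  # rejected
]
print(aggregate(raw))  # {'Mission': {'restaurant': 5}}
```

Real pipelines layer the same validate-then-aggregate shape over tools like Kafka and Spark rather than in-memory lists, but the integrity-first design is the same.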

Ideal qualifications:

  • A deep passion for working with data and developing software to address data processing challenges
  • Strong technical understanding of data modeling, design, architecture principles, and techniques to take business requirements from concept to implementation
  • Experience working with open source technologies like Kafka, Hadoop, Hive, Presto, and Spark
  • Experience with data warehousing services such as Google BigQuery and Amazon Redshift
  • Experience with SQL, plus Python and/or Java/Scala
  • Bachelor's, Master's, or PhD in Computer Science, or equivalent experience

For more information and to apply, head over here. For all open positions at Hoodline, check here. Got questions? Email us at [email protected].