Startup Careers

Be a part of our family by contributing to our portfolio companies’ innovation and success. Browse open positions below with Intel Capital portfolio companies.

Data Engineer at BrightEdge
Foster City, CA, US
As the industry pioneer behind Content Performance Marketing, BrightEdge has thoroughly redefined the concept of search engine optimization (SEO) by developing an award-winning platform that precisely measures and optimizes marketing content across online channels. Our cloud-based platform is powered by big data analysis that allows our customers to plan, optimize, and measure campaigns based on real-time content performance. BrightEdge has emerged as the leading international provider of cloud-based SEO Enterprise solutions due to its dynamic and results-oriented entrepreneurial culture.
We at BrightEdge are a late-stage startup that owns our market. We build our market-beating SEO (Search Engine Optimization) platform using tools like Python, Hadoop, Scala, Impala, Docker, and more. We offer an opportunity to work with world-class technology in an engineering team that "gets it".
BrightEdge is looking for a Data Engineer to join our Engineering team. The ideal candidate is passionate about measuring, monitoring, and improving the performance of our platform. In other words, this job entails diagnosing and delivering. Slow queries stick in your head all afternoon, bothering you until you have that "aha!" moment and rush back to implement your solution.
You will collaborate with product managers and support teams on projects to find data insights, optimize slow performance, connect the dots, and make recommendations. You will help the team automate and continuously improve data gathering, reporting, and analytics, and you will help build tools to increase efficiency.
If you've worked with Splunk, StackDriver, or Tableau, that's a huge plus. Knowing PowerBI or Google Data Studio is an advantage in this role, because sharing what we learn from your monitoring is critical. If you can dive into Hive or analyze the data from our Hadoop cluster with R, so much the better.
Put the things you've learned into practice with us. You'll be hard pressed to find a better place to do it.

Minimum Requirements

    • A four-year degree with a background in Computer Science, Statistics, or Engineering
    • One to three years of technical reporting or analytics experience

You Should Be Someone Who

    • Is excited by the possibility of stretching your skillset and learning from the best minds in the industry.
    • Has experience helping users with your debugging and troubleshooting skills.
    • Knows how to measure and monitor capacity and throughput of high-performance systems, including creating the reports to share that information.
    • Can spot an unoptimized database query from a mile away and loves to squeeze the last bit of performance out of it.
    • Has deep experience with build and test infrastructure in Python-based frameworks.
    • Has worked with Splunk, StackDriver, or Tableau (a huge plus).
    • Can dive into Hive or analyze data from our Hadoop cluster with R (also a huge plus).
    • Knows PowerBI or Google Data Studio (an advantage in this role, because sharing what we learn from your monitoring is critical).

Personal Qualities

    • Excellent business acumen and solid technology understanding
    • Interest in learning new skills and keeping pace with changing technologies
    • Ability to problem solve and work with others to find the best solution
    • Precision in your work and attention to all details
    • Capability to work independently and self-motivate