Successful candidates will be part of a small team with deep expertise in state-of-the-art distributed systems. They will complement our current expertise with their knowledge and broad experience in dynamic distributed data platforms, creating cutting-edge systems that enable machine learning methods and high-performance algorithms.
Required skills & experience:
Highly proficient in writing elegant and efficient code in Java and Scala.
3+ years of work experience with distributed systems.
Broad knowledge of modern web frameworks, languages and protocols.
Knowledge of one or more "big data" technologies: Storm, Spark, Akka, Shark
Hands-on experience with the main components of the Hadoop ecosystem.
Experience developing code on Linux or OS X, and with resource and system optimization on Linux.
Experience in any of the following is desirable but not required:
Real-Time/Streaming analytics systems
Knowledge of cloud technologies
Service-Oriented Architectures and Web Services, including REST
Proficiency in Clojure, Python, or Objective-C.
Knowledge of Databases and data warehousing platforms
Responsibilities:
Implement machine learning applications on mobile devices and servers, on both physical and virtualized infrastructure (public and private).
Create scalable systems software that delivers the results of distributed, high-performance machine learning algorithms across multiple endpoints on the network.
Be involved in all aspects of product design, planning, and implementation. (We want to incorporate your ideas.)