Syntelli Solutions is a fast-growing practice and is hiring aggressively in the big data analytics and data science space.

In addition to our passion around Performance Management, we are singularly committed to running our business on ‘principles’. We offer value-based consulting, and the value of our services does not go down toward year end! We follow a consultative sales process, respect our consultants, and take great pride in the solutions we design.

The majority of our clients are in telecom, media, retail, energy (oil & gas, utilities), manufacturing, professional services, hospitality, and healthcare, with revenues from $100M to $2B. Our clients include Nike, ADP, FedEx, Positec, McKesson, and many others.

Syntelli Solutions Inc. is an equal opportunity/affirmative action employer.

Want to take your career to the next level in advanced analytics and data science?

Contact Us


Current Positions in Advanced Analytics & Data Science

Data Science Analyst


Role Description:


We are looking for a Data Scientist to partner with our clients and help them make better use of their data. Our ideal candidate has a deep understanding of customer behavior analysis, segmentation, and predictive modeling, as well as strong communication skills. Candidates must also have experience with some form of SQL (e.g. Hive, PostgreSQL, MSSQL) and with either Python or R for statistical application development. Experience deploying Data Science solutions in a Hadoop (e.g. Hortonworks, MapR, Cloudera) and/or Cloud (e.g. AWS, Azure) environment is a plus.
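
For candidates wondering what this looks like in practice, here is a minimal, purely illustrative sketch of a propensity-modeling workflow of the kind described above: pulling features with SQL and fitting a model in Python. The table, columns, and data are hypothetical, and an in-memory SQLite database stands in for a client warehouse such as Hive or PostgreSQL.

    import sqlite3
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Hypothetical customer table; in practice this would live in Hive/PostgreSQL/MSSQL.
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE customers (tenure_months INT, monthly_spend REAL, churned INT);
        INSERT INTO customers VALUES
            (3, 120.0, 1), (36, 80.0, 0), (12, 200.0, 1), (48, 60.0, 0),
            (6, 150.0, 1), (24, 90.0, 0), (18, 110.0, 0), (2, 170.0, 1);
    """)

    # Pull features with SQL, then fit a simple churn-propensity model.
    df = pd.read_sql("SELECT tenure_months, monthly_spend, churned FROM customers", conn)
    X_train, X_test, y_train, y_test = train_test_split(
        df[["tenure_months", "monthly_spend"]], df["churned"],
        test_size=0.25, random_state=0, stratify=df["churned"])
    model = LogisticRegression().fit(X_train, y_train)
    print("Churn propensity:", model.predict_proba(X_test)[:, 1])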


Responsibilities:



  • Develop and communicate a deep understanding of client needs, perform analytical deep-dives to identify problems, opportunities and specific actions required

  • Develop reproducible and deployable statistical solutions on platforms such as R, Python, and Spark, using techniques such as Multi-Level Regression, SVM, and Neural Networks (see the sketch after this list)

  • Efficiently access data via multiple vectors (e.g. NFS, FTP, SSH, SQL, Sqoop, Flume, Spark)

  • Design experiments to maximize insights while minimizing error

  • Work with cross-functional teams (including Marketing, Product Management, Engineering, Design, Creative, and senior executives) to rapidly execute and iterate potential solutions
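
The sketch below is one hypothetical reading of the second bullet above: training an SVM-style classifier on the Spark platform with MLlib. The data and column names are invented for illustration, not taken from any client engagement.

    from pyspark.sql import SparkSession
    from pyspark.ml.feature import VectorAssembler
    from pyspark.ml.classification import LinearSVC

    spark = SparkSession.builder.appName("svm-sketch").getOrCreate()

    # Toy training data; real work would read from the client's data platform.
    df = spark.createDataFrame(
        [(3, 120.0, 1.0), (36, 80.0, 0.0), (12, 200.0, 1.0), (48, 60.0, 0.0)],
        ["tenure_months", "monthly_spend", "label"])

    # Assemble the feature columns into a vector and fit a linear SVM.
    features = VectorAssembler(
        inputCols=["tenure_months", "monthly_spend"],
        outputCol="features").transform(df)
    model = LinearSVC(featuresCol="features", labelCol="label").fit(features)
    model.transform(features).select("prediction", "rawPrediction").show()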


Basic Requirements:



  • 3+ years of relevant Analytics experience

  • 1+ year of relevant Data Science experience

  • Proven record of successful statistical product delivery

  • Deep understanding of statistical and data analysis techniques such as propensity modeling, segmentation, media mix modeling, customer 360, etc.

  • Ability to execute marketing science techniques via statistical applications such as R or Python

  • Significant experience with SQL and working with large datasets required

  • Strong verbal and written communication skills

  • BS/MS/PhD in a quantitative field a plus

  • Certifications in AWS and/or Azure a plus


Data Science Architect


Role Description:


We are looking for a Big Data Science Architect to lead Data Science product development initiatives for our clients. Our ideal candidate has a deep understanding of Big Data Science solutions and system architecture, development best practices, data governance protocols, and running statistical processes at scale, as well as strong communication skills.


This role spans the full lifecycle of data science: leading people, blind and lost in a world without statistics and data, into the brilliant, sparkling truth of high-speed statistics at scale. This Architect, with a team of equally motivated Data Scientists, will craft solutions in environments such as Azure and AWS, using services ranging from basic Azure Data Factory and HDInsight to hand-coded Spark Streaming and Node.js.


There are some requirements. Our Architects must have experience designing and deploying “Full-Stack” Data Science solutions in a Hadoop (e.g. Hortonworks, MapR, Cloudera) and/or Cloud (e.g. AWS, Azure) environment, including: collaborative solution design with multiple stakeholders; prediction/classification via Multi-Level Bayesian MCMC and/or distributed TensorFlow, VectorFlow, or MLlib; data ingestion; data storage; data transformations and modeling; data access patterns; security; version control; and, ideally, exposing services via RESTful APIs.
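
As a loose illustration of the last item, here is a hedged sketch of exposing a fitted model through a RESTful endpoint. The framework choice (Flask), the route, and the payload fields are assumptions made for the example, not a prescribed stack.

    from flask import Flask, jsonify, request
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    app = Flask(__name__)

    # Placeholder model trained on synthetic data; a real deployment would
    # load a versioned, validated model artifact instead.
    X, y = make_classification(n_samples=200, n_features=2, n_informative=2,
                               n_redundant=0, random_state=0)
    model = LogisticRegression().fit(X, y)

    @app.route("/predict", methods=["POST"])  # hypothetical endpoint
    def predict():
        body = request.get_json()
        score = model.predict_proba([[body["x1"], body["x2"]]])[0, 1]
        return jsonify({"score": float(score)})

    if __name__ == "__main__":
        app.run(port=5000)

A client application could then POST a JSON body such as {"x1": 1.2, "x2": 0.4} to /predict and receive a score back.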


Responsibilities:



  • Develop and communicate a deep understanding of client needs, and collaborate with clients on new approaches for using Data Science to help their business

  • Design “Full-Stack” Data Science Solutions/Products, including Ingestion, Storage, Prediction, and Access layers, within existing client infrastructure (Cloud or on-premise)

  • Provide recommendations on infrastructure improvements to better support Data Science

  • Oversee development, and personally develop, reproducible and deployable statistical solutions in languages such as C# and R/Python (with Spark) using techniques such as Multi-Level Regression, SVM, and Neural Networks

  • Efficiently access data via multiple vectors (e.g. NFS, FTP, SSH, SQL, Sqoop, Flume, Spark)

  • Design experiments to maximize insights while minimizing error

  • Work with cross-functional teams (including Marketing, Product Management, Engineering, Design, Creative, and senior executives) to rapidly execute and iterate potential solutions


Basic Requirements:



  • 5+ years of relevant Data Science/Analytics experience

  • 3+ years of relevant Application Development experience

  • Proven record of successful statistical product delivery

  • Deep understanding of statistical and data analysis techniques

  • Significant experience with Azure and/or AWS and working with large datasets required

  • Strong verbal and written communication skills

  • BS/MS/PhD in a quantitative/CS field a plus

  • Certifications in AWS and/or Azure a plus


Data Scientist


We are looking for a Data Scientist to partner with our clients and help them make better use of their data. Our ideal candidate has a deep understanding of customer behavior analysis, segmentation, and predictive modeling, as well as strong communication skills. Candidates must also have experience with some form of SQL (e.g. Hive, PostgreSQL, MSSQL) and with either Python or R for statistical application development. Experience deploying Data Science solutions in a Hadoop (e.g. Hortonworks, MapR, Cloudera) and/or Cloud (e.g. AWS, Azure) environment is a plus. To apply, please email Jason.crane@syntelli.com.


Responsibilities:



  • Develop and communicate a deep understanding of client needs, perform analytical deep-dives to identify problems, opportunities and specific actions required

  • Develop reproducible and deployable statistical solutions on platforms such as R, Python, and Spark, using techniques such as Multi-Level Regression, SVM, and Neural Networks

  • Efficiently access data via multiple vectors (e.g. NFS, FTP, SSH, SQL, Sqoop, Flume, Spark)

  • Design experiments to maximize insights while minimizing error

  • Work with cross-functional teams (including Marketing, Product Management, Engineering, Design, Creative, and senior executives) to rapidly execute and iterate potential solutions


Basic Requirements:



  • 3+ years of relevant Analytics experience

  • 1+ year of relevant Data Science experience

  • Proven record of successful statistical product delivery

  • Deep understanding of statistical and data analysis techniques such as propensity modeling, segmentation, media mix modeling, customer 360, etc.

  • Ability to execute marketing science techniques via statistical applications such as R or Python

  • Significant experience with SQL and working with large datasets required

  • Strong verbal and written communication skills

  • BS/MS/PhD in a quantitative field a plus

  • Certifications in AWS and/or Azure a plus


Ab Initio/ ETL Developer


We are seeking a talented Ab Initio/ETL Developer. Are you experienced with 837 transactions and looking to work for a great organization? Please email your resume to Jason.crane@syntelli.com.


Requirements:


7 or more years of IT development/programming/coding professional work experience, or an equivalent combination of transferable experience and education.


  • Bachelor’s degree in an IT-related field or equivalent work experience

  • Strong skills in various ETL tools, especially Ab Initio; exposure to other ETL tools (Talend, Spark, Boomi) is a plus

  • Experience with various RDBMSs (DB2, Oracle, SQL Server) and strong development skills in SQL, PL/SQL, and stored procedures

  • Experience with and understanding of unit testing, release procedures, coding design and documentation protocol, and change management procedures

  • Proficiency using versioning tools

  • Thorough knowledge of Information Technology fields and computer systems

  • Demonstrated organizational, analytical, and interpersonal skills

  • Flexible team player

  • Ability to manage tasks independently and take ownership of responsibilities

  • Ability to learn from mistakes and apply constructive feedback to improve performance

  • Must demonstrate initiative and effective independent decision-making skills

  • Ability to communicate technical information clearly and articulately

  • Ability to adapt to a rapidly changing environment

  • In-depth understanding of the systems development life cycle

  • Proficiency programming in more than one object-oriented programming language

  • Proficiency using standard desktop applications such as the MS Office suite and flowcharting tools such as Visio

  • Proficiency using debugging tools

  • Strong critical thinking skills to evaluate alternatives and present solutions that are consistent with business objectives and strategy

  • Experience mentoring and/or leading other development staff

  • Proven leadership abilities, including effective knowledge sharing, conflict resolution, facilitation of open discussions, fairness, and appropriate levels of assertiveness

  • Analytical and detail-oriented


Preferred Criteria:


  • Experience using Agile methodology

  • Health care experience

  • Hadoop knowledge

  • Working experience with 837 transactions


Pega Developer


Our organization is growing and looking for additional resources in Pega development. If you are looking for a great organization with outstanding benefits, please contact us at Jason.crane@syntelli.com.


Description:


Strong Technical and Functional Knowledge:


  • Pega System development toolset

  • Pega 7.1 Certified

  • Experience implementing, architecting, and designing solutions using case management


Other Qualifications:


  • 5+ years of experience in Pega system development/support, implementing systems solutions to business problems

  • 2+ years of hands-on/technical Pega 7.1 programming experience preferred

  • Demonstrated project management skills

  • Software development experience

  • Experience with Case Management, Business Services, JDE, and Master Data Management

  • Develop and enforce standards and best practices

  • Review and determine the best approach to implement requirements; communicate it to the project team; raise conflicts, complications, and issues in the requirements; provide sizing estimates

  • Lead design sessions toward the successful implementation of more complex requirements; define the class structure of data and work objects

  • Mentor and oversee the work of junior team members

  • Perform the tasks required to initiate a project based on the version of PRPC being used (the Application Profiler/Accelerator or Stage Configuration)

  • Oversee the proper configuration of the application security model

  • Configure complex integrations and services

  • Perform complex system configuration tasks; configure authentication services; oversee the Rule Set Versioning strategy and execution

  • Configure more complex HTML/JSP/JavaScript components where necessary

  • Health care business knowledge is a plus

  • Pega 7 certification required

Hadoop Developer


Duties & Responsibilities




  • Develop and implement Hadoop solutions.

  • Load data from disparate data sets.

  • Pre-process data using Hive and Pig (see the sketch after this list).

  • Design, build, install, configure, and support Hadoop.

  • Translate complex functional and technical requirements into detailed designs.

  • Analyze vast data stores and uncover insights.

  • Maintain security and data privacy.

  • Create scalable, high-performance web services for data tracking.

  • Support high-speed querying.

  • Manage and deploy HBase.

  • Take part in POC efforts to help build new Hadoop clusters.

  • Test prototypes and oversee handover to operational teams.

  • Propose best practices and standards.
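
As referenced above, here is an illustrative-only sketch of loading disparate data sets and doing Hive-style pre-processing, written in PySpark. The file paths and column names are hypothetical stand-ins for a real client's sources.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("preprocess-sketch").getOrCreate()

    # Load two disparate sources: CSV clickstream events and JSON customer profiles.
    clicks = spark.read.option("header", True).csv("/data/raw/clicks.csv")
    profiles = spark.read.json("/data/raw/profiles.json")

    # Cleanse, join, and aggregate before downstream analysis.
    daily = (clicks
             .withColumn("ts", F.to_timestamp("ts"))
             .join(profiles, "customer_id")
             .groupBy("customer_id", F.to_date("ts").alias("day"))
             .agg(F.count("*").alias("events")))

    daily.write.mode("overwrite").parquet("/data/curated/daily_events")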


Qualifications:



  • Minimum 5 years of application development experience in Java.

  • 5 years designing and developing enterprise-level data, integration, and reporting/analytics solutions, with a proven track record of delivering backend systems that participate in a complex ecosystem

  • Minimum 3 years of development experience on the Hadoop platform, including Pig, Hive, Sqoop, HBase, Flume, Spark, and related tools.

  • Minimum 3 years of professional experience designing and developing BI/Big Data applications.

  • Experience with Hadoop 2.0+ and YARN applications

  • Proven experience with data modeling, complex data structures, data processing, data quality, and data lifecycle

  • Current knowledge of Unix/Linux scripting; solid experience in code optimization and high-performance computing.



Additional Skills:



  • Experience with messaging and collection frameworks like Kafka, Flume, or Storm.

  • 3+ years of distributed database experience (HBase, Accumulo, Cassandra, or equivalent).

  • Knowledge of Big Data-related technologies and open source frameworks preferred.

  • 2–5 years of hands-on experience with the Hadoop stack (e.g. MapReduce, Pig, Hive, HBase)

  • Experience integrating heterogeneous applications is required; experience orchestrating complex data flows is preferable.

  • Deep understanding of and ability to use SQL, XML, JSON, and UNIX are required.

  • Experience designing and supporting RESTful web services is required

  • Demonstrated experience in the Java Enterprise ecosystem is required

  • Knowledge of various open source tools and technologies in the Java Enterprise ecosystem is required

  • Minimum 5 years of professional experience designing and developing applications on one operating system (Unix or Windows) or designing complex multi-tiered applications.

  • Minimum 3 years of work experience as a developer is desirable

  • Experience working with at least 3 business applications/systems, including tier 4 production support.

  • Certification in Hadoop preferred


Location can be Charlotte, NC or Dallas/Houston, TX.

Big Data System Admin


Duties & Responsibilities:


  • Administer, install, and support many databases, including Oracle, DB2, SQL Server, Sybase, Teradata, Netezza, Vertica, cloud databases, and MySQL

  • Manage and support several physical Hadoop clusters running Cloudera, Hortonworks, MapR, Apache, or Pivotal distributions, for product and sales engineering support.

  • Perform installation and support of O/S virtualization software from a variety of vendors, including HP, IBM, VMware, and Oracle.

  • Manage operating system updates, patches, and configuration changes for production servers.

  • Provide highly responsive support for day-to-day requests from the product sales, development, support, and professional services teams.

  • Document procedures; perform performance analysis and debugging of slow-running production and development build and regression-testing processes.

  • Manage disk storage for all servers (SAN administration); manage ESX clusters; manage database licenses and renewals.

  • Understanding of software design principles, operating system design, and database systems and concepts.

  • Networking: Ethernet and TCP/IP, routing, DNS, etc.; UNIX and Windows shell scripting; MS Windows, Linux, and UNIX operating systems; virtualization software, especially VMware. Plus: experience learning and understanding all the major Hadoop distributions and the toolsets running on Hadoop.

  • Understanding of and experience with all the major database vendors’ software: Oracle, DB2, Teradata, SQL Server, MySQL

  • Experience with virtualization software: VMware vSphere, ESXi, VirtualBox, KVM; DevOps and DevSecOps, particularly Chef server, Vagrant, and PowerShell DSC; Docker

  • Server provisioning with Razor or Dell OpenManage; AWS or other cloud management experience


Hadoop/Spark Developer


Syntelli is always on the lookout for exceptional talent. Join the Syntelli Data Science team and propel your career forward with a cutting-edge company that focuses on solving analytical problems for businesses of all kinds. Syntelli is committed to fostering an innovative company culture, and employee growth is at the core of the organization’s success.

DUTIES & RESPONSIBILITIES



  • This role calls for technical design and development expertise and experience in the Big Data space.

  • Hands-on development experience with some of the following technologies: Hadoop, Spark, HBase, Hive, Pig, R

  • Solid skills in Java, C++, Python, Scala, and Unix script development; modeling NoSQL data stores; and a good understanding of Spark or Hadoop MapReduce-style programming.

  • Experience optimizing the management of, and deriving insights from, non-structured and non-relational data, and providing business value from content through improved information management.

  • Experience analyzing text, streams, documents, social media, big data, and speech with emerging Hadoop-based big data, NoSQL, Natural Language Processing, Search, and Text Analytics technologies and techniques.

  • Apply big data technologies such as Hadoop, Spark, or Streams with NoSQL data management and related programming languages for analytics and experimentation with large, multi-structured data sets (see the streaming sketch after this list).
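
The streaming sketch referenced above: a hypothetical Spark Structured Streaming job reading from Kafka and producing windowed counts. The broker address, topic name, and the need for the spark-sql-kafka connector package on the job's classpath are all assumptions, not project specifics.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("stream-sketch").getOrCreate()

    # Read a stream of events from a hypothetical Kafka topic.
    events = (spark.readStream
              .format("kafka")
              .option("kafka.bootstrap.servers", "broker:9092")
              .option("subscribe", "events")
              .load())

    # Count messages per key over one-minute windows.
    counts = (events
              .selectExpr("CAST(key AS STRING) AS key", "timestamp")
              .groupBy(F.window("timestamp", "1 minute"), "key")
              .count())

    # Write the running counts to the console for demonstration.
    (counts.writeStream
           .outputMode("complete")
           .format("console")
           .start()
           .awaitTermination())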


SKILLS & EXPERIENCE



  • Experience with Hadoop and Spark

  • Experience with Linux RedHat

  • Experience with higher-level programming languages like Java, Scala, and Python

  • Knowledge of BigInsights Administration

  • System Integration knowledge essential

  • Agile Development knowledge useful

  • Knowledge in Information Server, Master Data Management (MDM), InfoSphere Streams, Extract Transform Load (ETL) desirable

  • Knowledge of cluster and parallel processing optimization techniques a plus

  • Experience in C++ and Linux (RedHat) in a clustered environment a plus

  • Experience in advanced analytics, statistical modeling (SPSS or R or SAS), and mathematics a plus

  • Experience with open source big data technologies like Kafka, Storm, Cassandra, HBase, etc.

  • Bachelor’s Degree

  • At least 1 year of experience with Hadoop

  • At least 6 months of experience with Spark

  • At least 2 years of experience with higher-level programming languages like Java, Scala, and Python

  • At least 6 months of experience with Linux


Location can be Charlotte, NC or Dallas/Houston, TX.


Expert Data Scientist (Senior)


Syntelli is looking for an exceptional Data Scientist to join the team and deliver consulting sessions, architectural design sessions, and implementations focused on big data integration. This role also supports pre-sales efforts as well as post-sales delivery of major projects.

DUTIES & RESPONSIBILITIES



  • Deliver customer-facing consultative sessions to help cultivate Big Data services opportunities in partnership with the Sales team as needed

  • Provide technical oversight and guidance to peers and junior members

  • Provide consultative assistance as needed for the creation of Statements of Work as a client deliverable

  • Support Sales team and attend events and client opportunities to present Big Data solutions

  • Establish self as a subject matter expert on Big Data solutions


SKILLS & EXPERIENCE



  • 5–8 years of Business Intelligence consulting experience with a Bachelor’s or Master’s degree.

  • Exceptional interpersonal, communication, and presentation skills at the CxO and CTO levels.

  • 1+ years of experience with the Apache Hadoop stack (e.g. MapReduce, Pig, Hive, HBase, Flume)

  • 6 years of experience with related technologies (e.g. Java, Linux, Apache, Perl/Python/PHP)

  • Knowledge of NoSQL platforms (HBase, MongoDB, Cassandra, Accumulo, etc.)

  • 6 years of experience with ETL (Extract-Transform-Load) tools (e.g. Informatica, Ab Initio)

  • 6 years of experience with BI tools and reporting software (e.g. MicroStrategy, Tableau, Cognos, Pentaho)

  • Demonstrated experience in generating and closing identified revenue targets

  • Demonstrated ability to provide technical oversight for large complex projects and achieve desired customer satisfaction from inception to deployment in a consulting environment

  • Advanced analytical, problem-solving, negotiation, and organizational skills, with a demonstrated ability to multi-task, organize, prioritize, and meet deadlines

  • Ability to interface with the client in a pre-sales fashion, and on an on-going basis as projects progress

  • Demonstrated initiative and positive can-do attitude

  • High level of professionalism, integrity and commitment to quality

  • Ability to work independently and as part of a team

  • Demonstrated attentiveness to quality and productivity

  • Strongly prefer candidates who hold one or more certifications in Apache Hadoop-based platforms such as Cloudera or Hortonworks


Location can be Charlotte, NC or Dallas/Houston, TX.


 
