Big Data Analytics Careers

Syntelli Solutions is a fast-growing practice line and is hiring aggressively within the big data analytics and data science space.

In addition to our passion for Performance Management, we are singularly committed to running our business on ‘principles’. We offer value-based consulting, and the value of our services does not go down toward year end! We follow a consultative sales process, respect our consultants, and take great pride in the solutions we design.

The majority of our clients are in telecom, media, retail, energy (oil & gas, utilities), manufacturing, professional services, hospitality, and healthcare, with revenues from $100M to $2B. Our clients include Nike, ADP, FedEx, Positec, McKesson, and many others.

Syntelli Solutions Inc. is an equal opportunity/affirmative action employer.


In 2017, Syntelli Solutions Inc. was selected as one of the best places to work by the Charlotte Business Journal.


Want to take your career to the next level in advanced analytics and data science?

Contact Us


Current Positions in Advanced Analytics & Data Science

Data Science Analyst


Role Description:


We are looking for a Data Scientist to partner with our clients and help them make better use of their data. Our ideal candidate has a deep understanding of customer behavior analysis, segmentation, and predictive modeling, as well as strong communication skills. Candidates must also have experience in some form of SQL (e.g. Hive, PostgreSQL, MSSQL) and either Python or R for statistical application development. Experience deploying Data Science solutions in a Hadoop (e.g. Hortonworks, MapR, Cloudera) and/or Cloud (e.g. AWS, Azure) environment is a plus.
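
To give candidates a concrete feel for the work, here is a minimal, illustrative propensity-modeling sketch in Python with scikit-learn. The file and column names (customers.csv; recency, frequency, monetary, converted) are hypothetical and not drawn from any client engagement.

    # Minimal propensity-modeling sketch (hypothetical data layout).
    import pandas as pd
    from sklearn.model_selection import train_test_split
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    # Customer-behavior extract, e.g. pulled via SQL into a flat file.
    df = pd.read_csv("customers.csv")

    X = df[["recency", "frequency", "monetary"]]
    y = df["converted"]
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=42
    )

    model = LogisticRegression(max_iter=1000)
    model.fit(X_train, y_train)

    # Propensity score: the probability that each customer converts.
    scores = model.predict_proba(X_test)[:, 1]
    print("AUC:", roc_auc_score(y_test, scores))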


Responsibilities:



  • Develop and communicate a deep understanding of client needs; perform analytical deep-dives to identify problems, opportunities, and specific actions required

  • Develop reproducible and deployable statistical solutions on platforms such as R/Python/Spark using techniques such as Multi-Level Regression, SVM, and Neural Networks

  • Efficiently access data via multiple vectors (e.g. NFS, FTP, SSH, SQL, Sqoop, Flume, Spark)

  • Design experiments to maximize insights while minimizing error

  • Work with cross-functional teams (including Marketing, Product Management, Engineering, Design, Creative, and senior executives) to rapidly execute and iterate potential solutions


Basic Requirements:



  • 3+ years of relevant Analytics experience

  • 1+ years of relevant Data Science experience

  • Proven record of successful statistical product delivery

  • Deep understanding of statistical and data analysis techniques such as propensity modeling, segmentation, media mix modeling, customer 360, etc.

  • Ability to execute marketing science techniques via statistical applications such as R or Python

  • Significant experience with SQL and working with large datasets required

  • Strong verbal and written communication skills

  • BS / MS / PhD in quantitative field a plus

  • Certifications in AWS and/or Azure a plus


Data Science Architect


Role Description:


We are looking for a Big Data Science Architect to lead Data Science product development initiatives for our clients. Our ideal candidate should have a deep understanding of Big Data Science solutions and system architecture, development best practices, data governance protocols, running statistical processes at scale, as well as strong communication skills.


This role spans the full lifecycle of data science: leading people, blind and lost in a world without statistics and data, into the brilliant, sparkling truth of high-speed statistics at scale. This Architect, and a team of equally motivated Data Scientists, will craft solutions in environments such as Azure and AWS, using services ranging from basic Azure Data Factory and HDInsight to hand-coded Spark Streaming and Node.js.


There are some requirements. Our Architects must have experience designing and deploying “Full-Stack” Data Science solutions in a Hadoop (e.g. Hortonworks, MapR, Cloudera) and/or Cloud (e.g. AWS, Azure) environment, including: collaborative solution design with multiple stakeholders; prediction/classification via Multi-Level Bayesian MCMC and/or distributed TensorFlow, VectorFlow, or MLlib; data ingestion; data storage; data transformations and modeling; data access patterns; security; version control; and, ideally, exposing services via RESTful APIs.
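
For illustration only, here is a minimal sketch of the kind of distributed prediction pipeline this role designs, using PySpark and Spark MLlib and saving the fitted pipeline as a versionable artifact. The path and column names (/data/features; f1, f2, f3, label) are hypothetical.

    # Distributed prediction-pipeline sketch with Spark MLlib.
    from pyspark.sql import SparkSession
    from pyspark.ml import Pipeline
    from pyspark.ml.feature import VectorAssembler
    from pyspark.ml.classification import LogisticRegression

    spark = SparkSession.builder.appName("propensity").getOrCreate()

    # Feature table prepared by the ingestion/transformation layers.
    df = spark.read.parquet("/data/features")

    assembler = VectorAssembler(inputCols=["f1", "f2", "f3"], outputCol="features")
    lr = LogisticRegression(featuresCol="features", labelCol="label")
    model = Pipeline(stages=[assembler, lr]).fit(df)

    # Persist the whole fitted pipeline so scoring is reproducible.
    model.write().overwrite().save("/models/propensity")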


Responsibilities:



  • Develop and communicate a deep understanding of client needs; collaborate with clients on new approaches for using Data Science to help their business

  • Design “Full-Stack” Data Science solutions/products, including ingestion, storage, prediction, and access layers within existing client infrastructure (Cloud or on-premise)

  • Provide recommendations on infrastructure improvements to better support Data Science

  • Oversee development and personally develop reproducible and deployable statistical solutions in languages such as C# and R/Python (with Spark) using techniques such as Multi-Level Regression, SVM, and Neural Networks

  • Efficiently access data via multiple vectors (e.g. NFS, FTP, SSH, SQL, Sqoop, Flume, Spark)

  • Design experiments to maximize insights while minimizing error

  • Work with cross-functional teams (including Marketing, Product Management, Engineering, Design, Creative, and senior executives) to rapidly execute and iterate potential solutions


Basic Requirements:



  • 5+ years of relevant Data Science/Analytics experience

  • 3+ years of relevant Application Development experience

  • Proven record of successful statistical product delivery

  • Deep understanding of statistical and data analysis techniques

  • Significant experience with Azure and/or AWS and working with large datasets required

  • Strong verbal and written communication skills

  • BS / MS / PhD in quantitative/CS field a plus

  • Certifications in AWS and/or Azure a plus


Data Scientist


We are looking for a Data Scientist to partner with our clients and help them make better use of their data. Our ideal candidate has a deep understanding of customer behavior analysis, segmentation, and predictive modeling, as well as strong communication skills. Candidates must also have experience in some form of SQL (e.g. Hive, PostgreSQL, MSSQL) and either Python or R for statistical application development. Experience deploying Data Science solutions in a Hadoop (e.g. Hortonworks, MapR, Cloudera) and/or Cloud (e.g. AWS, Azure) environment is a plus. Please email Jason.crane@syntelli.com.


Responsibilities:



  • Develop and communicate a deep understanding of client needs; perform analytical deep-dives to identify problems, opportunities, and specific actions required

  • Develop reproducible and deployable statistical solutions on platforms such as R/Python/Spark using techniques such as Multi-Level Regression, SVM, and Neural Networks

  • Efficiently access data via multiple vectors (e.g. NFS, FTP, SSH, SQL, Sqoop, Flume, Spark)

  • Design experiments to maximize insights while minimizing error

  • Work with cross-functional teams (including Marketing, Product Management, Engineering, Design, Creative, and senior executives) to rapidly execute and iterate potential solutions


Basic Requirements:



  • 3+ years of relevant Analytics experience

  • 1+ years of relevant Data Science experience

  • Proven record of successful statistical product delivery

  • Deep understanding of statistical and data analysis techniques such as propensity modeling, segmentation, media mix modeling, customer 360, etc.

  • Ability to execute marketing science techniques via statistical applications such as R or Python

  • Significant experience with SQL and working with large datasets required

  • Strong verbal and written communication skills

  • BS / MS / PhD in quantitative field a plus

  • Certifications in AWS and/or Azure a plus


Ab Initio/ETL Developer


We are seeking a talented Ab Initio/ETL Developer. Are you experienced with 837 transactions and looking to work for a great organization? Please email your resume to Jason.crane@syntelli.com.


Requirements:


7 or more years of IT development/programming/coding professional work experience, or an equivalent combination of transferable experience and education.


· Bachelor’s degree in an IT related field or equivalent work experience


· Strong skills in various ETL tools, especially Ab Initio; exposure to other ETL tools such as Talend, Spark, and Boomi is a plus


· Experience with various RDBMSs (DB2, Oracle, SQL Server), with strong development skills in SQL, PL/SQL, and stored procedures


· Experience with and understanding of unit testing, release procedures, coding design and documentation protocols, and change management procedures


· Proficiency using versioning tools


· Thorough knowledge of Information Technology fields and computer systems


· Demonstrated organizational, analytical and interpersonal skills


· Flexible team player


· Ability to manage tasks independently and take ownership of responsibilities


· Ability to learn from mistakes and apply constructive feedback to improve performance


· Must demonstrate initiative and effective independent decision-making skills


· Ability to communicate technical information clearly and articulately


· Ability to adapt to a rapidly changing environment


· In-depth understanding of the systems development life cycle


· Proficiency programming in more than one object-oriented programming language


· Proficiency using standard desktop applications such as the MS Office suite and flowcharting tools such as Visio


· Proficiency using debugging tools


· High critical thinking skills to evaluate alternatives and present solutions that are consistent with business objectives and strategy


· Experience mentoring / or leading other development staff


· Proven leadership abilities including effective knowledge sharing, conflict resolution, facilitation of open discussions, fairness and displaying appropriate levels of assertiveness


· Analytical and detail oriented


Preferred Criteria


· Experience using Agile methodology


· Health care experience


· Hadoop knowledge


· Working experience with 837 transactions


Pega Developer


Our organization is growing and looking for additional resources in Pega development. If you are looking for a great organization with outstanding benefits, please contact us at Jason.crane@syntelli.com.


Description:


Strong Technical and Functional Knowledge:


· Pega System development toolset


· Pega 7.1 Certified


· Experience in implementing, architecting and designing solutions using case management


Other Qualifications:


· 5+ years’ experience in Pega system development/support, implementing systems solutions to business problems


· 2+ years of hands-on/technical Pega 7.1 programming experience preferred


· Demonstrated project management skills


· Software development experience


· Experience with Case Management, Business Services, JDE, Master Data Management.


· Develop and enforce standards and best practices


· Review and understand the best approach to implementing requirements; communicate it to the project team; raise conflicts, complications, and issues in the requirements; provide sizing estimates


· Lead design sessions towards the successful implementation of more complex requirements; define the class structure of data and work objects


· Mentor and oversee work of junior team members


· Perform appropriate tasks required to initiate a project based on the version of PRPC being used (the Application Profiler/Accelerator or Stage Configuration)


· Oversee the proper configuration of the application security model


· Configure complex integrations and services


· Perform complex system configuration task; configure authentication services; oversee the Rule Set Versioning strategy and execution


· Configure more complex HTML/JSP/JavaScript components where necessary


· Health Care business knowledge is a plus


· Requirements: Pega 7 Certified developer

Hadoop Developer


Duties & Responsibilities




  • Hadoop development and implementation.

  • Loading from disparate data sets.

  • Pre-processing using Hive and Pig (see the sketch following this list).

  • Designing, building, installing, configuring and supporting Hadoop.

  • Translate complex functional and technical requirements into detailed design.

  • Perform analysis of vast data stores and uncover insights.

  • Maintain security and data privacy.

  • Create scalable and high-performance web services for data tracking.

  • High-speed querying.

  • Managing and deploying HBase.

  • Being a part of a POC effort to help build new Hadoop clusters.

  • Test prototypes and oversee handover to operational teams.

  • Propose best practices/standards.
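
For illustration, here is a minimal sketch of the kind of pre-processing mentioned above, written as HiveQL executed through PySpark's Hive support (Pig would be an equivalent alternative). The table and column names (raw_events; customer_id, channel, amount) are hypothetical.

    # Hive-style pre-processing sketch, run through Spark's HiveQL support.
    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder.appName("preprocess")
        .enableHiveSupport()
        .getOrCreate()
    )

    # The SELECT statement itself would run unchanged in Hive.
    clean = spark.sql("""
        SELECT customer_id,
               LOWER(TRIM(channel)) AS channel,
               CAST(amount AS DOUBLE) AS amount
        FROM   raw_events
        WHERE  amount IS NOT NULL
    """)

    # Write the cleaned data back as a managed table for downstream jobs.
    clean.write.mode("overwrite").saveAsTable("events_clean")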


Qualifications:



  • Minimum 5 years of application development experience in Java.

  • 5 years designing and developing enterprise-level data, integration, and reporting/analytics solutions. Proven track record of delivering backend systems that participate in a complex ecosystem.

  • Minimum 3 years of development experience on the Hadoop platform, including Pig, Hive, Sqoop, HBase, Flume, Spark, and related tools.

  • Minimum 3 years of professional experience designing and developing BI/Big Data applications.

  • Experience with Hadoop 2.0+ and YARN applications

  • Proven experience with data modeling, complex data structures, data processing, data quality, and data lifecycle

  • Current knowledge of Unix/Linux scripting; solid experience in code optimization and high-performance computing.



Additional Skills:



  • Experience in messaging and collection frameworks like Kafka, Flume, or Storm.

  • 3+ years of distributed database experience (HBase, Accumulo, Cassandra, or equivalent).

  • Knowledge in Big Data related technologies and open source frameworks preferred.

  • 2–5 years of hands-on experience with the Hadoop stack (e.g. MapReduce, Pig, Hive, HBase)

  • Experience in integrating heterogeneous applications is required. Experience orchestrating complex data flows is preferable.

  • Deep understanding and ability to use SQL, XML, JSON and UNIX are required.

  • Experience designing and supporting RESTful Web Services is required

  • Demonstrated experience in Java Enterprise ecosystem is required

  • Knowledge in various Open Source tools and technologies in Java Enterprise ecosystem is required

  • Minimum 5 years professional experience designing and developing applications on one operating system (Unix or Windows) or designing complex multi-tiered applications.

  • Minimum of 3 years work experience as a developer is desirable

  • Has experience working with at least 3 business applications/systems and has also provided tier 4 production support.

  • Certification in Hadoop preferred


Location can be Charlotte, NC or Dallas/Houston, TX.

Big Data System Admin


Duties & Responsibilities:




    • Administer, install, and support many databases, including Oracle, DB2, SQL Server, Sybase, Teradata, Netezza, Vertica, cloud databases, and MySQL

    • Manage and support several physical Hadoop clusters, running Cloudera, Hortonworks, MapR, Apache, or Pivotal, for product and sales engineering support.

    • Perform installation and support of O/S virtualization software from a variety of vendors, including HP, IBM, VMware, and Oracle.

    • Manage operating system updates, patches, and configuration changes for production servers.

    • Provide very responsive support for day-to-day requests from product sales, development, support, and professional services teams.

    • Responsible for documenting procedures; performance analysis and debugging of slow-running production and development build and regression-testing processes.

    • Management of disk storage for all servers (SAN administration); management of ESX clusters; management of database licenses and renewals.

    • Understanding of software design principles, operating system design, and database systems and concepts.

    • Networking: Ethernet and TCP/IP, routing, DNS, etc.; UNIX and Windows shell scripting; MS Windows, Linux, and UNIX operating systems; virtualization software, especially VMware. Plus: experience learning and understanding all the major Hadoop distributions and toolsets running on Hadoop.

    • Understanding of and experience with all the major database vendors’ software: Oracle, DB2, Teradata, SQL Server, MySQL

    • Experience with virtualization software: VMware vSphere, ESXi, VirtualBox, KVM

    • DevOps and DevSecOps, particularly Chef server, Vagrant, PowerShell DSC, and Docker

    • Server provisioning with Razor or Dell OpenManage

    • AWS or other cloud management experience





Hadoop/Spark Developer


Syntelli is always on the lookout for exceptional talent. Join the Syntelli Data Science team and propel your career into the industry with a cutting-edge company that focuses on solving analytical problems for businesses of all kinds. Syntelli is committed to fostering an innovative company culture and employee growth is at the core of making the organization a success.

DUTIES & RESPONSIBILITIES







      • This role calls for technical design and development expertise, with experience in the Big Data space.

      • Hands-on development experience in technologies including Hadoop, Spark, HBase, Hive, Pig, and R.

      • Solid skills in Java, C++, Python, Scala, and Unix script development; experience modeling NoSQL data stores; and a good understanding of Spark or Hadoop MapReduce-style programming.

      • Experience optimizing the management of, and deriving insights from, non-structured, non-relational data, and providing business value from content through improved information management, is key.

      • Experience analyzing text, streams, documents, social media, big data, and speech with emerging Hadoop-based big data, NoSQL, Natural Language Processing, Search, and Text Analytics technologies and techniques (see the sketch following this list).

      • Apply big data technologies such as Hadoop, Spark or Streams with NoSQL data management and related programming languages for analytics and experimentation with large, multi-structured data sets.
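
For illustration, here is a minimal text-analytics sketch in Python with scikit-learn, clustering a few documents on TF-IDF features; the example documents are made up, and the sketch stands in for the much richer NLP tooling named above.

    # Text-analytics sketch: TF-IDF features + k-means clustering.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.cluster import KMeans

    # Hypothetical documents (e.g. social media posts or support tickets).
    docs = [
        "shipment delayed again, very unhappy",
        "great service and fast delivery",
        "delivery arrived broken, requesting a refund",
    ]

    tfidf = TfidfVectorizer(stop_words="english")
    X = tfidf.fit_transform(docs)

    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
    print(km.labels_)  # cluster assignment per document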






SKILLS & EXPERIENCE







      • Experience with Hadoop and Spark

      • Experience with Red Hat Linux

      • Experience with higher-level programming languages like Java, Scala, and Python

      • Knowledge of BigInsights Administration

      • System Integration knowledge essential

      • Agile Development knowledge useful

      • Knowledge in Information Server, Master Data Management (MDM), InfoSphere Streams, Extract Transform Load (ETL) desirable

      • Knowledge of cluster and parallel processing optimization techniques a plus

      • Experience in C++ and Linux (RedHat) in a clustered environment a plus

      • Experience in advanced analytics, statistical modeling (SPSS or R or SAS), and mathematics a plus

      • Experience with open source big data technologies like Kafka, Storm, Cassandra, HBase, etc.

      • Bachelor’s Degree

      • At least 1 year of experience in Hadoop

      • At least 6 months of experience in Spark

      • At least 2 years of experience in higher-level programming languages like Java, Scala, and Python

      • At least 6 months of experience in Linux






Location can be Charlotte, NC or Dallas/Houston, TX.


Expert Data Scientist (Senior)


In this role, we are looking for a Senior Data Scientist to develop business intelligence by querying data repositories using machine learning (classification, regression, clustering), operations research (linear and integer optimization), and statistical (hypothesis testing & confidence intervals, principal component analysis, etc.) techniques; devise methods for identifying data patterns and trends; use the R language to generate predictive models for predicting risk, building recommendation engines, predictive maintenance, fraud analytics, etc.; prepare data and generate tidy data sets using Python, Spark, and Hadoop techniques; and tabulate results using data visualization tools such as Tableau and QlikView.
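
As a small worked illustration of the operations research side of this role, here is a linear optimization solved with SciPy; the objective and constraints are made-up numbers, not client data.

    # Linear-optimization sketch: maximize 3x + 2y
    # subject to x + y <= 4, x + 3y <= 6, x >= 0, y >= 0.
    from scipy.optimize import linprog

    res = linprog(
        c=[-3, -2],              # negate to express maximization as minimization
        A_ub=[[1, 1], [1, 3]],   # constraint coefficients
        b_ub=[4, 6],             # constraint bounds
        bounds=[(0, None), (0, None)],
    )
    print(res.x, -res.fun)       # optimal (x, y) = (4, 0), objective = 12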


Minimum Requirements



  1. Education and Experience: Bachelor’s degree in Computer Science, Computer Engineering, or Information Technology. Foreign educational equivalent accepted. Five (5) years’ experience as a Programmer Analyst, Software Engineer, Data Scientist, or in a related field.

  2. Skills:

    • Languages (R, Python, Spark, Hadoop)

    • Machine Learning (classification, regression, clustering)

    • Operations Research Techniques (Linear and Integer Optimizations)

    • Statistical Techniques (hypothesis testing & confidence intervals, principal component analysis, etc.)

    • Data Visualization (Tableau and QlikView)

    • RDBMS (DB2, IMS, Oracle)


Big Data Architect


Position Summary


The Big Data Architect is responsible for providing technical leadership, focusing on starting and growing Big Data, analytics, and other programs within our organization and for our clients. The concentration will be on defining the Big Data/Hadoop technology strategy and roadmap, and on architecting and standing up the Big Data environment. In addition, the qualifying candidate will be expected to dedicate a portion of his/her time to keeping up and experimenting with innovative technologies in the BI and Analytics space.


Organizational Relationship


This position reports directly to the CTO.


Accountabilities


· Plan and establish Hadoop technology standards and usage frameworks within the BI Department.


· Work closely with the Infrastructure team to define the hardware procurement and upgrade roadmap.


· Work in concert with a team of ETL developers to ensure efficient and accurate data transfer across the entire EDW ecosystem and Big Data platforms.


· Build and optimize information models and physical data layouts; configure, optimize, and monitor RDBMS and Hadoop environments; and improve overall processing efficiencies to support the needs of the business.


· Work with business teams and technical analysts to understand business requirements. Determine how to leverage technology to create solutions that satisfy the business requirements.


· Experience building Business Intelligence platforms in an enterprise environment, including data integration (batch, micro-batches, real-time data streaming) across Hadoop, RDBMSs, and data warehousing.


· Responsible for driving innovations and developing proofs-of-concept and prototypes to help illustrate approaches to technology and business problems.


· All other duties as assigned at management’s discretion.


Characteristics & Attributes


· Strong communication skills – listening, verbal, written and presentation.


· Must demonstrate “out-of-the-box” thinking and creative problem-solving skills.


· Ability to understand business requirements and build pragmatic, cost-effective solutions using Agile project methodologies.


· Attention to detail and accuracy.


· Ability to work effectively across all levels of the organization.


· Ability to handle multiple tasks and function in a team-oriented, fast-paced, matrix environment.


· Excellent grasp of integrating multiple data sources into an enterprise data management platform and can lead data storage solution design.



Education & Experience


· A Bachelor’s-level degree in computer science, information technology, engineering, or a related field is required.


· Minimum of 8–10 years of enterprise IT application experience, including at least 3 years architecting strategic, scalable BI, Big Data, and data warehousing solutions.


· Experience with software development methodologies and structured approaches to system development.


· Hands-on experience with related scripting and programming languages (e.g. Java, Scala, Linux, Apache, Perl/Python) and analytics tools (e.g. search and text analytics, SAS, R, BI tools).

Program Manager



We are hiring a talented Program Manager. The ideal candidate must have a minimum of 12 years’ experience developing or managing complex IT projects in a fast-paced, matrix environment. If you are looking for a great company and have a Life and P/C Insurance background, please send your resume to jason.crane@syntelli.com.


Must have 7+ years of project/program/delivery management experience with 3+ years using Agile/Scrum.


Have direct experience working on programs focused on rules-based engines, eApplications, and underwriting in the Life Insurance space.


Knowledge of information and data security, cloud, and SaaS products.


Demonstrated ability to manage projects from start to finish with technical knowledge


Strong program management experience, with proven success managing vendors, driving cost efficiencies, innovating solutions, and delivering frequent successes in a high-speed, results-oriented culture.


Solid understanding of key program and project management and communications tools, such as Microsoft Project, Microsoft Word, Microsoft Excel, and Microsoft PowerPoint


Other Qualifications





    • Bachelor’s degree in Computer Science or Math

    • Professional certifications such as PMP, ITIL, Agile, or Scrum

    • Experience working in insurance or life insurance space is a strong plus

    • Demonstrated ability to think strategically about business, product, and technical issues

    • Strong verbal and written communication skills with the ability to work effectively across internal and external organizations

    • Strong negotiation, influencing and problem-solving skills

    • Ability to work under pressure and handle multiple priorities

    • Ability to travel as needed




Data Software Engineer



Syntelli is looking to hire YOU. We are growing due to several key initiatives and looking for a Data Software Engineer. This is a full-time role with great benefits. If you want to work for an organization with great employee appreciation, values, and opportunity, please email jason.crane@syntelli.com.


JOB DESCRIPTION:


We are looking for an experienced data software engineer who has been working with large-scale, distributed data pipelines. The analytics engineer will be responsible for helping create our next-generation analytics platform, with responsibilities spanning the full engineering lifecycle: architecture and design, data analysis, software development, QA, release, and operations support. The engineer will work as a member of a dedicated DevOps team tasked with building and operating the analytics platform, and will work closely with (and support) a team of data analysts/scientists.
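
For illustration, here is a minimal sketch of a high-velocity pipeline stage using Spark Structured Streaming over Kafka; the broker address and topic name are hypothetical, and the spark-sql-kafka connector package must be available to Spark.

    # Streaming-pipeline sketch: running counts over a Kafka topic.
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col

    spark = SparkSession.builder.appName("analytics-pipeline").getOrCreate()

    events = (
        spark.readStream.format("kafka")
        .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
        .option("subscribe", "events")                     # hypothetical topic
        .load()
    )

    # Maintain a running count of events per key.
    counts = events.groupBy(col("key")).count()

    query = (
        counts.writeStream.outputMode("complete")
        .format("console")   # console sink, for the sketch only
        .start()
    )
    query.awaitTermination()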


RESPONSIBILITIES:


Create and maintain an analytics infrastructure that supports high-volume, high-velocity data pipelines.

Troubleshoot and resolve issues in our dev, test and production environments

Develop and test data integration components to high standards of quality and performance

Lead code reviews and act as mentor to less experienced members of the team

Assist with planning and executing releases of data pipeline components into production

Troubleshoot and resolve critical production failures in the data pipeline

Research, identify and recommend technical and operational improvements that may result in improved reliability, efficiency and maintenance of the analytics pipeline

Evaluate and advise on technical aspects of open work requests in the product backlog with the project lead


Analyze massive amounts of data both real-time and batch processing

Prototype ideas for new tools, products and services

Ensure a quality transition to production and solid production operation of applications

Help automate and streamline our operations and processes


BASIC QUALIFICATIONS:


Minimum of a Bachelor’s Degree in Computer Science or a related field

At least 5 years of solid development experience on the Linux/Unix platform.

5 to 7 years of development experience in Java/Scala

At least 2 years of that experience should be in the analytics sphere, working with distributed compute frameworks.

Strong experience using ETL tools such as Pentaho and the Hadoop ETL tech stack: Hive, Sqoop, and Oozie

Experience with at least 2 live projects in Scala/Spark.

Experience working in an AWS environment

Knowledge of the following technologies: Spark, Storm, Kafka, Kinesis, Avro.

Adaptable, proactive and willing to take ownership

Good communication skills, ability to analyze and clearly articulate complex issues and technologies


Azure Data Lake and HDInsight Architect

Syntelli is growing and looking to expand its leadership team. If you are looking to work for one of the most popular and most admired companies, then look no further than Syntelli. We are looking for an Azure Data Lake and HDInsight Architect. If you have experience leading and designing data architecture, please email us your resume.


10+ years of experience as a technology leader designing and developing data architecture solutions, with 2+ years specializing in big data architecture or data analytics.

Collaborate with customers to understand and identify capability gaps and develop requirements in order to architect, implement, deliver, and manage high-performance information system production resources for machine learning and data analytics processing.

Design, architect and implement IT solutions for production environments.

Experience using Azure cloud-based services for data warehousing & analytics.

Experience with Azure Data Lake, Azure Data Lake Analytics, Azure Data Factory, and HDInsight.

Experience with the design and implementation of HDInsight with Spark and Kafka clusters.

You must be creative, work effectively in teams, and have excellent written and oral communication skills.

Experience with SQL Server Integration Services, Reporting Services and Analytic Services desired.

