Syntelli Solutions is a fast-growing practice line and is hiring aggressively in the big data analytics and data science space.

In addition to our passion for Performance Management, we are singularly committed to running our business on ‘principles’. We offer value-based consulting, and the value of our services does not go down toward year end! We follow a consultative sales process, respect our consultants, and take great pride in the solutions we design.

The majority of our clients are in telecom, media, retail, energy (oil & gas, utilities), manufacturing, professional services, hospitality, and healthcare, with revenues from $100M to $2B. Our clients include Nike, ADP, FedEx, Positec, McKesson, and many others.

Syntelli Solutions Inc. is an equal opportunity/affirmative action employer.

In 2017, Syntelli Solutions Inc. was selected as one of the best places to work by Charlotte Business Journal.

Want to take your career to the next level in advanced analytics and data science?

Current Positions in Advanced Analytics & Data Science

Business Analyst (Raleigh, NC and Columbus, OH)

Business Analysts can turn data into information, and information into insight. They play a crucial role in our engagements and should be focused on helping businesses implement technology solutions that move the needle. Business Analysts are future managers and need to show signs of leadership during their engagements – whether guiding the client or helping to guide their team. Analysis, Communication, and Management are the three pillars that we look for in this role.

Key Responsibilities

  • Interpret and analyze data using modern statistical techniques
  • Evaluate business processes, anticipate requirements, and identify opportunities for improvement
  • Translate and map the technical outputs to business end user requirements
  • Conduct meetings and present to both internal and external stakeholders
  • Document and communicate processes and results
  • Work closely with clients, management, and data science/technical teams
  • Perform UAT (user acceptance testing)
  • Help manage projects and monitor team performance


Requirements

  • Outstanding analytical and problem-solving skills
  • Minimum of 5 years of experience in business analysis or related field
  • Advanced technical skills and understanding of business practices
  • Strong communication skills, able to regularly deal with both business and technical stakeholders
  • Proven track record of leading and supporting successful projects
  • Ability to influence stakeholders and present technical findings
  • Exceptional planning and organizational skills

Sample Technical Profile

Methodologies: Waterfall, Rational Unified Process (RUP), Agile (Scrum)
Data Analysis: Gap Analysis, Impact Analysis, Feasibility Studies
Operating Systems: Windows 2000/XP/Vista/7/8, Mac
Requirements Management Tools: Rational RequisitePro, Smartsheet
Business Modeling: MS Visio, Lucidchart
Office Tools: MS Word, MS Excel, MS PowerPoint, MS Access
Databases & Reporting: MS Access, MS Excel, Cognos, SSRS, Toad for Oracle, SQL, MySQL, Salesforce
Project Management Tools: MS Project, MS Access, SharePoint, VersionOne


Education

  • BS in Business, Economics, Computer Science, Information Management, Statistics, or similar
  • MBA or Master’s in a related field preferred; required for the Associate level

Data Science Architect

We are looking for a Data Science Architect to lead Data Science product development initiatives for our clients. Our ideal candidate should have a deep understanding of Big Data Science solutions and system architecture, development best practices, data governance protocols, and running statistical processes at scale, as well as strong communication skills.

This role spans the full lifecycle of data science: leading people, blind and lost in a world without statistics and data, into the brilliant, sparkling truth of high-speed statistics at scale. This Architect, and a team of equally motivated Data Scientists, will craft solutions in environments such as Azure and AWS, using services ranging from basic Azure Data Factory and HDInsight to hand-coded Spark Streaming and Node.js.

There are some requirements. Our Architects must have experience designing and deploying “Full-Stack” Data Science Solutions in a Hadoop (e.g. Hortonworks, MapR, Cloudera) and/or Cloud (e.g. AWS, Azure) environment, including: Collaborative Solution Design with multiple stakeholders; prediction/classification via Multi-Level Bayesian MCMC and/or distributed TensorFlow, VectorFlow, or MLlib; Data Ingestion; Data Storage; Data Transformations and Modeling; Data Access Patterns; Security; Version Control; and, ideally, exposing services via RESTful APIs.


Key Responsibilities

  • Develop and communicate a deep understanding of client needs; collaborate with clients on new approaches for using Data Science to help their business
  • Design “Full-Stack” Data Science Solutions/Products including Ingestion, Storage, Prediction, and Access layers within existing client infrastructure (Cloud or on-premise)
  • Provide recommendations on infrastructure improvements to better support Data Science
  • Oversee development and personally develop reproducible and deployable statistical solutions in languages such as C# and R/Python (with Spark) using techniques such as Multi-Level Regression, SVM, and Neural Networks
  • Efficiently access data via multiple vectors (e.g. NFS, FTP, SSH, SQL, Sqoop, Flume, Spark)
  • Design experiments to maximize insights while minimizing error
  • Work with cross-functional teams (including Marketing, Product Management, Engineering, Design, Creative, and senior executives) to rapidly execute and iterate potential solutions

Basic Requirements:

  • 5+ years of relevant Data Science/Analytics experience
  • 3+ years of relevant Application Development experience
  • Proven record of successful statistical product delivery
  • Deep understanding of statistical and data analysis techniques
  • Significant experience with Azure and/or AWS and working with large datasets required
  • Strong verbal and written communication skills
  • BS / MS / PhD in quantitative/CS field a plus
  • Certifications in AWS and/or Azure a plus

Data Scientist
We are looking for a Data Scientist to partner with our clients and help them make better use of their data. Our ideal candidate should have a deep understanding of customer behavior analysis, segmentation, and predictive modeling, as well as strong communication skills. Candidates must also have experience in some form of SQL (e.g. Hive, PostgreSQL, MSSQL, etc.) and either Python or R for statistical application development. Experience deploying Data Science Solutions in a Hadoop (e.g. Hortonworks, MapR, Cloudera) and/or Cloud (e.g. AWS, Azure) environment is a plus.


Key Responsibilities

  • Develop and communicate a deep understanding of client needs; perform analytical deep-dives to identify problems, opportunities, and specific actions required
  • Develop reproducible and deployable statistical solutions on platforms such as R/Python/Spark using techniques such as Multi-Level Regression, SVM, and Neural Networks
  • Efficiently access data via multiple vectors (e.g. NFS, FTP, SSH, SQL, Sqoop, Flume, Spark)
  • Design experiments to maximize insights while minimizing error
  • Work with cross-functional teams (including Marketing, Product Management, Engineering, Design, Creative, and senior executives) to rapidly execute and iterate potential solutions

Basic Requirements:

  • 3+ years of relevant Analytics experience
  • 1+ year of relevant Data Science experience
  • Proven record of successful statistical product delivery
  • Deep understanding of statistical and data analysis techniques such as propensity modeling, segmentation, media mix modeling, customer 360, etc.
  • Ability to execute marketing science techniques via statistical applications such as R or Python
  • Significant experience with SQL and working with large datasets required
  • Strong verbal and written communication skills
  • BS / MS / PhD in quantitative field a plus
  • Certifications in AWS and/or Azure a plus
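To make the “propensity modeling” requirement above concrete, here is a minimal sketch (plain Python, with hypothetical feature names and toy data) of a logistic-regression propensity model trained with simple gradient descent — a common way to estimate the probability that a customer takes an action:

```python
import math

def train_logistic(X, y, lr=0.1, epochs=2000):
    """Fit a logistic-regression propensity model with plain
    per-sample gradient descent. X: list of feature vectors,
    y: 0/1 outcome labels. Returns (weights, bias)."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = b + sum(wj * xj for wj, xj in zip(w, xi))
            p = 1.0 / (1.0 + math.exp(-z))   # predicted propensity
            err = p - yi                     # gradient of log-loss
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def propensity(x, w, b):
    """Estimated probability that a customer with features x converts."""
    z = b + sum(wj * xj for wj, xj in zip(w, x))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical features per customer: [recent_visits, email_clicks]
X = [[0, 0], [1, 0], [0, 1], [2, 1], [3, 2], [4, 3]]
y = [0, 0, 0, 1, 1, 1]
w, b = train_logistic(X, y)
```

In practice a role like this would use R or Python libraries (and far richer features) rather than hand-rolled gradient descent; the sketch only illustrates the underlying technique.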

Account Manager / Business Development Manager (Charlotte, NC and Raleigh, NC)
Job Duties

  • Locates or proposes business deals by contacting potential partners; discovering and exploring opportunities.
  • Screens potential business deals by analyzing market strategies, deal requirements, potential, and financials; evaluating options; resolving internal priorities; recommending equity investments.
  • Develops negotiating strategies and positions by studying integration of new venture with company strategies and operations; examining risks and potentials; estimating partners’ needs and goals.
  • Closes new business deals by coordinating requirements; developing and negotiating contracts; integrating contract requirements with business operations.
  • Protects organization’s value by keeping information confidential.
  • Updates job knowledge by participating in educational opportunities; reading professional publications; maintaining personal networks; participating in professional organizations.
  • Enhances organization reputation by accepting ownership for accomplishing new and different requests; exploring opportunities to add value to job accomplishments.

Requirements and Qualifications

  • Bachelor’s degree in business or related field; or equivalent experience required
  • Minimum of four years of related experience in IT sales, management, or a relevant industry role; Data Science / Data Analytics experience is preferred.
  • Superior computer skills; proficient in Microsoft Office Suite; knowledge of MS Dynamics or other programs a plus
  • Excellent written and verbal communication skills
  • Pays strict attention to detail
  • Able to work in a fast-paced environment and work independently
  • Persuasive and unafraid to negotiate; has good business acumen
  • Up-to-date on latest industry trends; able to articulate trends and potential clearly and confidently
  • Possesses excellent interpersonal and customer service skills
  • Able to multi-task while efficiently managing priorities
  • Role requires travel <30% to support clients regionally.

ETL Developer (Raleigh, NC)

An ETL Developer, due to the nature of our teams and engagements, must also be a capable leader. Often working with an architect, they must bring technical leadership and expertise to the project; specifically, experience with application software, maintenance, testing, optimization, engineering, and process improvement projects. They should exhibit a clear and thorough understanding of business processes and workflows and be comfortable working in both waterfall and agile methodologies. Candidates must have advanced knowledge of SQL Server, the SSIS Catalog, SSIS, and T-SQL, plus some data modeling experience.

Key Responsibilities

  • Design, build, and deploy effective SSIS packages
  • Implement stored procedures and effectively query a database
  • Translate requirements from the business and analysts into technical code
  • Identify and test for bugs and bottlenecks in the ETL solutions
  • Ensure the best possible performance and quality in the packages
  • Provide support and address issues in the packages
  • Work with architects to design optimal ETL pipelines


Requirements

  • Outstanding analytical and problem-solving skills
  • Strong grasp of the technical aspects of business intelligence
  • Strong communication skills, able to regularly deal with both business and technical stakeholders
  • Proven track record of writing advanced SQL, including query tuning
  • Experience identifying data quality issues
  • Database design experience a plus
  • Experience designing and building a complete ETL/SSIS process moving and transforming data for ODS, Staging, and Data Warehousing
  • Knowledge of C#/VB.NET a plus
  • Ability to utilize ad-hoc techniques to perform on-the-fly data analysis
  • Proficiency in deploying and debugging a production SSIS environment

Sample Technical Profile

ETL Tools: Ab Initio v2.15/3.0.x/3.1.x, DataStage Designer v9.1, Informatica
Databases: Oracle, Microsoft SQL Server, IBM Netezza, IBM DB2, Teradata v14
Operating Systems: UNIX, AIX, DOS, Windows 7, Windows XP
Languages: SQL, PL/SQL, Unix shell scripting
Scheduling Tools: Tivoli Workload Scheduler, AutoSys, Control-M
Version Control Tools: EME, CA7 Harvest
Domain Knowledge: Banking, Insurance, Logistics
Defect Tracking Tools: HP Mercury Quality Center
Application Tools: MS Word, MS Excel, WinSCP, Notepad++, EditPlus, TextPad
Data Modeling: Star and Snowflake schemas, Erwin


Education

  • BS in Mathematics, Economics, Computer Science, Information Management, Statistics, or similar
  • Master’s in a related field preferred

Hadoop Developer
Duties & Responsibilities

  • Perform Hadoop development and implementation.
  • Load data from disparate data sets.
  • Pre-process data using Hive and Pig.
  • Design, build, install, configure, and support Hadoop.
  • Translate complex functional and technical requirements into detailed designs.
  • Perform analysis of vast data stores and uncover insights.
  • Maintain security and data privacy.
  • Create scalable and high-performance web services for data tracking.
  • Enable high-speed querying.
  • Manage and deploy HBase.
  • Participate in POC efforts to help build new Hadoop clusters.
  • Test prototypes and oversee handover to operational teams.
  • Propose best practices and standards.


Requirements

  • Minimum 5 years of Application Development experience in Java.
  • 5 years designing and developing enterprise-level data, integration, and reporting/analytics solutions, with a proven track record of delivering backend systems that participate in a complex ecosystem.
  • Minimum 3 years of development experience on the Hadoop platform, including Pig, Hive, Sqoop, HBase, Flume, Spark, and related tools.
  • Minimum 3 years of professional experience designing and developing BI/Big Data applications.
  • Experience with Hadoop 2.0+ and YARN applications
  • Proven experience with data modeling, complex data structures, data processing, data quality, and the data lifecycle
  • Current knowledge of Unix/Linux scripting; solid experience in code optimization and high-performance computing.

Additional Skills:

  • Experience in messaging and collection frameworks like Kafka, Flume, or Storm.
  • 3+ years of distributed database experience (HBase, Accumulo, Cassandra, or equivalent).
  • Knowledge in Big Data related technologies and open source frameworks preferred.
  • 2–5 years of hands-on experience with the Hadoop stack (e.g. MapReduce, Pig, Hive, HBase)
  • Experience integrating heterogeneous applications is required; experience orchestrating complex data flows is preferable.
  • Deep understanding of and ability to use SQL, XML, JSON, and UNIX is required.
  • Experience designing and supporting RESTful Web Services is required
  • Demonstrated experience in Java Enterprise ecosystem is required
  • Knowledge in various Open Source tools and technologies in Java Enterprise ecosystem is required
  • Minimum 5 years of professional experience designing and developing applications on one operating system (Unix or Windows) or designing complex multi-tiered applications.
  • Minimum of 3 years of work experience as a developer is desirable
  • Has experience working with at least 3 business applications/systems and has also provided tier 4 production support.
  • Certification in Hadoop preferred

Location can be Charlotte, NC or Dallas/Houston, TX.

Big Data System Admin
Duties & Responsibilities:

  • Administer, install, and support many databases, including Oracle, DB2, SQL Server, Sybase, Teradata, Netezza, Vertica, MySQL, and cloud databases
  • Manage and support several physical Hadoop clusters, running Cloudera, Hortonworks, MapR, Apache, or Pivotal distributions, for product and sales engineering support.
  • Perform installation and support of O/S virtualization software from a variety of vendors, including HP, IBM, VMware, and Oracle.
  • Manage operating system updates, patches, and configuration changes for production servers.
  • Provide very responsive support for day to day requests from product sales, development, support and professional services teams.
  • Document procedures; perform performance analysis and debugging of slow-running production and development build and regression-testing processes.
  • Manage disk storage for all servers (SAN administration).
  • Manage ESX clusters; manage database licenses and renewals.
  • Understanding of software design principles, operating system design, and database systems and concepts.
  • Networking: Ethernet and TCP/IP, routing, DNS, etc.
  • UNIX and Windows shell scripting; MS Windows, Linux, and UNIX operating systems.
  • Virtualization software, especially VMware. Plus: experience with learning and understanding all the major Hadoop distributions and toolsets running on Hadoop.
  • Understanding of and experience with all the major database vendors’ software: Oracle, DB2, Teradata, SQL Server, MySQL
  • Experience with virtualization software: VMware vSphere, ESXi, VirtualBox, KVM
  • DevOps and DevSecOps experience, particularly Chef server, Vagrant, PowerShell DSC, and Docker
  • Server provisioning with Razor or Dell OpenManage
  • AWS or other cloud management experience

Hadoop/Spark Developer
Syntelli is always on the lookout for exceptional talent. Join the Syntelli Data Science team and propel your career into the industry with a cutting-edge company that focuses on solving analytical problems for businesses of all kinds. Syntelli is committed to fostering an innovative company culture and employee growth is at the core of making the organization a success.


  • This is a resource with technical design and development expertise and experience in the Big Data space.
  • Hands-on development experience in technologies including Hadoop, Spark, HBase, Hive, Pig, and R.
  • Solid skills in Java, C++, Python, Scala, Unix script development, and modeling NoSQL data stores, and a good understanding of Spark or Hadoop MapReduce-style programming.
  • Experience optimizing the management of, and deriving insights from, non-structured, non-relational data, and providing business value from content through improved information management is key.
  • Experience analyzing text, streams, documents, social media, big data, and speech with emerging Hadoop-based big data, NoSQL, Natural Language Processing, Search, and Text Analytics technologies and techniques.
  • Apply big data technologies such as Hadoop, Spark, or Streams with NoSQL data management and related programming languages for analytics and experimentation with large, multi-structured data sets.


Requirements

  • Experience with Hadoop and Spark
  • Experience with Linux (Red Hat)
  • Experience with higher-level programming languages like Java, Scala, and Python
  • Knowledge of BigInsights Administration
  • System Integration knowledge essential
  • Agile Development knowledge useful
  • Knowledge of Information Server, Master Data Management (MDM), InfoSphere Streams, and Extract Transform Load (ETL) desirable
  • Knowledge of cluster and parallel processing optimization techniques a plus
  • Experience in C++ and Linux (Red Hat) in a clustered environment a plus
  • Experience in advanced analytics, statistical modeling (SPSS, R, or SAS), and mathematics a plus
  • Experience with open source big data technologies like Kafka, Storm, Cassandra, HBase, etc.
  • Bachelor’s Degree
  • At least 1 year of experience in Hadoop
  • At least 6 months of experience in Spark
  • At least 2 years of experience in higher-level programming languages like Java, Scala, and Python
  • At least 6 months of experience in Linux

Location can be Charlotte, NC or Dallas/Houston, TX.

Sr. Data Scientist
In this role, we are looking for a Senior Data Scientist to develop business intelligence by querying data repositories using machine learning (classification, regression, clustering), operations research (Linear and Integer Optimization), and statistical (hypothesis testing & confidence intervals, principal component analysis, etc.) techniques; devise methods for identifying data patterns and trends; use the R language to generate predictive models for predicting risk, building recommendation engines, predictive maintenance, fraud analytics, etc.; prepare data and generate tidy data sets using Python, Spark, and Hadoop techniques; and tabulate results using data visualization tools such as Tableau and QlikView.
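As a small illustration of the “hypothesis testing & confidence intervals” portion of these duties, here is a minimal sketch (plain Python standard library, with hypothetical data) of a 95% confidence interval for a sample mean using the normal approximation:

```python
import math
import statistics

def mean_confidence_interval(sample, z=1.96):
    """Return (mean, lower, upper) for an approximate 95% CI
    on the sample mean, using the normal approximation (z = 1.96)."""
    m = statistics.mean(sample)
    # Standard error of the mean: sample stdev / sqrt(n)
    se = statistics.stdev(sample) / math.sqrt(len(sample))
    return m, m - z * se, m + z * se

# Hypothetical data: daily conversion rates (%) for a retail client
rates = [2.1, 2.4, 1.9, 2.6, 2.3, 2.0, 2.2, 2.5]
m, lo, hi = mean_confidence_interval(rates)
print(f"mean={m:.2f}, 95% CI=({lo:.2f}, {hi:.2f})")
```

For small samples like this, a t-distribution critical value would normally replace the fixed z = 1.96; the role would typically do this in R or with SciPy rather than by hand.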

Minimum Requirements

Education and Experience: Bachelor’s degree in Computer Science, Computer Engineering, or Information Technology. Foreign educational equivalent accepted. Five (5) years of experience as a Programmer Analyst, Software Engineer, Data Scientist, or related position.


Required Skills

  • Languages & Platforms (R, Python, Spark, Hadoop)
  • Machine Learning (classification, regression, clustering)
  • Operations Research Techniques (Linear and Integer Optimization)
  • Statistical Techniques (hypothesis testing & confidence intervals, principal component analysis, etc.)
  • Data Visualization (Tableau and QlikView)
  • RDBMS (DB2, IMS, Oracle)

Big Data Architect
Position Summary

The Big Data Architect is responsible for providing technical leadership, focusing on starting and growing Big Data, analytics, and other programs within our organization and for our clients. Concentration will be on defining the Big Data/Hadoop technology strategy and roadmap, and on architecting and standing up the Big Data environment. In addition, the qualifying candidate will be expected to dedicate a portion of his/her time to keeping up and experimenting with innovative technologies in the BI and Analytics space.

Organizational Relationship

This position reports directly to the CTO.


Key Responsibilities

  • Plan and establish Hadoop technology standards and usage frameworks within the BI Department.
  • Work closely with the Infrastructure team to define the hardware procurement and upgrade roadmap.
  • Work in concert with a team of ETL developers to ensure efficient and accurate data transfer between the entire EDW ecosystem and Big Data platforms.
  • Build and optimize information models, physical data layouts, configuration, optimization and monitoring of RDBMS and Hadoop environments and improve overall processing efficiencies to support the needs of the business.
  • Work with business teams and technical analysts to understand business requirements. Determine how to leverage technology to create solutions that satisfy the business requirements.
  • Experience building Business Intelligence platforms in an enterprise environment, including data integration (batch, micro-batch, real-time data streaming) across Hadoop, RDBMSs, and data warehouses.
  • Responsible for driving innovations and developing proofs-of-concept and prototypes to help illustrate approaches to technology and business problems.
  • All other duties as assigned at management’s discretion.

Characteristics & Attributes

  • Strong communication skills – listening, verbal, written and presentation.
  • Must demonstrate “out of box” thinking and creative problem solving skills.
  • Ability to understand business requirements and build pragmatic/cost-effective solutions using Agile project methodologies.
  • Attention to detail and accuracy.
  • Ability to work effectively across all levels of the organization.
  • Ability to handle multiple tasks and function in a team-oriented, fast-paced, matrix environment.
  • Excellent grasp of integrating multiple data sources into an enterprise data management platform and can lead data storage solution design.

Education & Experience

  • A Bachelor’s-level degree in computer science, information technology, engineering, or a related field is required.
  • Minimum of 8–10 years of enterprise IT application experience, including at least 3 years architecting strategic, scalable BI, Big Data, and data warehousing solutions.
  • Experience with software development methodologies and structured approaches to system development.
  • Hands-on experience with related scripting and programming languages (e.g. Java, Scala, Linux, Apache, Perl/Python) and analytics tools (e.g. search and text analytics, SAS, R, BI tools).

Program Manager
We are hiring a talented Program Manager. The ideal candidate must have a minimum of 12 years of experience developing or managing complex IT projects in a fast-paced environment within a matrix organization.

Must have 7+ years of project/program/delivery management experience with 3+ years using Agile/Scrum.

Must have direct experience working on programs focused on rules-based engines, eApplications, and underwriting in the life insurance space.

Knowledge of information and data security, cloud, and SaaS products.

Demonstrated ability to manage projects from start to finish with technical knowledge

Strong program management experience with proven success in managing vendors, driving cost efficiencies, innovating solutions, and delivering frequent successes in a high-speed, results-oriented culture.

Solid understanding of key program and project management and communications tools, such as Microsoft Project, Microsoft Word, Microsoft Excel, and Microsoft PowerPoint

Other Qualifications

  • Bachelor’s degree in Computer Science or Math
  • Professional certifications such as PMP, ITIL, Agile, Scrum, etc.
  • Experience working in insurance or life insurance space is a strong plus
  • Demonstrated ability to think strategically about business, product, and technical issues
  • Strong verbal and written communication skills with the ability to work effectively across internal and external organizations
  • Strong negotiation, influencing and problem-solving skills
  • Ability to work under pressure and handle multiple priorities
  • Ability to travel as needed

Data Software Engineer
Syntelli is looking to hire YOU. We are growing due to several key initiatives and are looking for a Data Software Engineer. This is a full-time role with great benefits.


We are looking for an experienced data software engineer who has been working with large-scale, distributed data pipelines. The engineer will be responsible for helping create our next-generation analytics platform, with responsibilities spanning the full engineering lifecycle: architecture and design, data analysis, software development, QA, release, and operations support. The engineer will work as a member of a dedicated DevOps team tasked with building and operating the analytics platform, and will work closely with (and support) a team of data analysts/scientists.


Key Responsibilities

  • Create and support an analytics infrastructure to support high-volume and high-velocity data pipelines.
  • Troubleshoot and resolve issues in our dev, test and production environments
  • Develop and test data integration components to high standards of quality and performance
  • Lead code reviews and act as mentor to less experienced members of the team
  • Assist with planning and executing releases of data pipeline components into production
  • Troubleshoot and resolve critical production failures in the data pipeline
  • Research, identify and recommend technical and operational improvements that may result in improved reliability, efficiency and maintenance of the analytics pipeline
  • Evaluate and advise on technical aspects of open work requests in the product backlog with the project lead
  • Analyze massive amounts of data both real-time and batch processing
  • Prototype ideas for new tools, products and services
  • Ensure a quality transition to production and solid production operation of applications
  • Help automate and streamline our operations and processes


Requirements

  • Minimum of a Bachelor’s Degree in Computer Science or a related field
  • At least 5 years of solid development experience on the Linux/Unix platform
  • 5 to 7 years of development experience in Java/Scala
  • At least 2 years of that experience in the analytics sphere, working with distributed compute frameworks
  • Strong experience using ETL tools such as Pentaho and the Hadoop ETL tech stack (Hive, Sqoop, Oozie)
  • Experience with at least 2 live projects in Scala/Spark
  • Experience working in an AWS environment
  • Knowledge of the following technologies: Spark, Storm, Kafka, Kinesis, Avro.
  • Adaptable, proactive and willing to take ownership
  • Good communication skills, ability to analyze and clearly articulate complex issues and technologies

Talend ETL Developer (White Plains, NY area)

Key Responsibilities

  • Work with stakeholders to develop business rules and business rule exceptions
  • Translate requirements from the business and analysts into technical designs and code
  • Design, develop, validate and deploy the Talend ETL processes using Talend data integration and data quality tools
  • Implement stored procedures and effectively query a database
  • Identify and test for bugs and bottlenecks in the ETL solution
  • Ensure the best possible performance and quality on the solutions
  • Provide support and fix issues

Required Skills and Experience:

  • 5+ years of experience developing ETL logic
  • 2+ years of experience using Talend tools to load and unload data from Hadoop (HDFS, Hive, Sqoop)
  • Experience implementing ETL logic in Java
  • Excellent RDBMS (DB2, Oracle, SQL Server) knowledge for development using SQL
  • Basic UNIX OS and shell scripting skills
  • Experience working in a DevOps environment with automated builds and automated build validation
  • Close collaboration with other developers and architects
  • Strong initiative with the ability to identify areas of improvement with little direction
  • Team-player excited to work in a fast-paced environment
  • Position is located in the White Plains, NY area

Solutions Architect
Duties: Architect Hadoop, Hibernate, Spring, and Struts data warehousing-based solutions for storing, accessing, transforming, and analyzing high-volume, real-time data using the Java, Spark Core, Spark SQL, and Python programming languages, JSP web technologies, and Hortonworks HDP on the Apache Pig platform in an integrated development environment (Eclipse); convert SQL queries into Spark transformations using Spark RDD, Python, and Scala; implement dynamic partitions and bucketing using Hive; use HDFS and Spark for high-performance access to diverse data sources (Hadoop clusters, HBase, S3, etc.); use Sqoop to exchange data between non-relational (HBase) and relational databases (Oracle, DB2, SQL Server); schedule Hadoop jobs using Apache Oozie; automate application builds using Ant and Maven; maintain configuration information; use Spark SQL to process structured data and generate interactive dashboards; and collect, aggregate, and process live data streams using Flume and Spark Streaming.

Education and Experience: Bachelor’s degree in Computer Science, Computer Engineering or Information Technology. Foreign educational equivalent accepted (five [5] years of post-secondary education from a foreign university as equivalent to a four [4] year baccalaureate degree from an accredited university in the United States). Plus five (5) years’ experience as Software Architect, Software Developer, Systems/Data Analyst, Programmer or related position.


Required Skills

  • Frameworks (Hadoop, Spring, Hibernate, Struts)
  • Programming Languages (Java, Spark Core, Spark SQL, Python, Scala, Spark RDD)
  • Web Technologies (JSP)
  • Hortonworks HDP
  • Apache Pig
  • IDE (Eclipse)
  • Hive
  • HDFS
  • Sqoop
  • HBase
  • Oracle, DB2, SQL Server
  • Apache Oozie
  • Ant, Maven
  • Spark SQL
  • Flume, Spark Streaming

GC Worksite: Exact worksite is not known at this time, but the position will be paid from, and controlled and supervised from, the HQ office in Charlotte, NC (13925 Ballantyne Corporate Place, Suite 260, Charlotte, NC 28277). No travel and/or telecommuting.

Want to join our team?