Data Engineer

Rapid Cycle Solutions · McLean, VA · Information Technology

Rapid Cycle Solutions LLC (RCS) is an innovative small business providing IT and management consulting services to the U.S. Federal Government and commercial clients. We have unique strengths in complex, cross-organizational solution analysis, design, development, implementation, and change management supporting enterprise requirements. Our professionals have deep consulting backgrounds supporting the unique needs of our clients, with proven experience leading strategic initiatives within civilian Government agencies.

RCS is seeking a Data Engineer to provide expertise in data engineering, data management, systems engineering, software engineering, and project management for the systems used to collect and process data, supporting those systems and advancing the way they are tasked and used to identify and perform operations.

  • The program leverages integrated discrete technologies to support massive data processing, storage, modeling, and analytics over several thousand unique data sources to perform threat identification and analysis. Its components comprise the ‘core’ data platform capability: the technologies and systems, the data, data processing and modeling, and use of the data via data science and querying of the data corpus and model(s) to derive information. The ‘core’ data platform capability serves as the backbone for other capabilities (e.g., web applications) to accelerate operations.

    Scope:

  • The big data platform integrates COTS, GOTS, and open-source technologies to securely load, process, correlate, and enable querying of disparate, high-volume data collections.

  • The work requires operating, maintaining, and enhancing this ecosystem. The scope includes data engineering, software engineering, data management, systems engineering, and project management of the ecosystem.

  • The work requires the ability to develop a complete and timely understanding of the challenges and requirements while leveraging existing and planned data collection capabilities to support them directly.

    This position requires the candidate to work onsite in McLean, VA. Relocation assistance is not available.

    What you will do:

  • Sustain, enhance, and (with direction) modernize the ecosystem of existing capabilities for ingesting and exploiting high volumes of disparate datasets in near-real time, correlating them with historical holdings, and enabling querying via ad hoc and recurring jobs dependent on complex data model(s).

  • Ensure the existing ecosystem of capabilities dependent on “core” data sets, data models, intermediate data models, data ingestion (Extract, Transform, Load (ETL)) pipelines, and data governance is sustained at the same (or better) levels.

  • The ecosystem of connected systems, capabilities, analytics, and workflows shall not have downtimes greater than eight (8) hours. Outages during core hours (0700 – 1600 hours) shall have immediate mitigation and resolution. Outages during non-core hours (1600 – 0700 hours) shall require on-call support, with response within two (2) hours for catastrophic events and immediate triage of non-catastrophic events during core hours.

  • Use project management and development tools, such as Git, Jira, and Confluence, to document and track requirements.

    In making requirements determinations, consider the following:

  • Responding to ad hoc, fluid operational requirements to create and deliver new and unique capabilities, data products, intermediate data models, assessments, and findings derived from the data collection capability.

  • Supporting efforts to advance and implement new data analytics, intermediate data models, algorithms, ETL, and methodologies (for data engineering, modeling, and querying data).

  • Coordinating the integration and transition of new technologies and new capabilities that are accessible to a broader user base across the Mission Partners.

    Required Qualifications/Education:

  • Clearance: Active TS/SCI clearance with Polygraph

  • Developing, running, and analyzing results of complex queries against data stores and data processing environment(s).

  • Manipulating structured and unstructured data to perform analysis and generate reporting and products.

  • Using analytic techniques and tools to provide analytic support.

  • Developing and using data processing technologies such as Python, Apache Spark, Java, SQL, ECL, Jenkins, PyPI, Terraform, Cloudera, Apache NiFi, Apache Hop, and Elasticsearch.

  • Developing, validating, and using methodologies to support analytic requirements in clustered computing environments.

  • System hardware and software troubleshooting.

  • Information Technology architectures (e.g., servers, storage, and virtualization).

  • Data analytics tools such as Splunk and ELK (Elasticsearch, Logstash, Kibana).

  • System and enterprise level health and status monitoring.

  • Planning and coordinating system testing, integration, deployment, installation, validation, troubleshooting, and analysis across networks and systems of collection or processing equipment.

  • Monitoring and adjusting collection or processing systems or equipment.

    Nice to Have Qualifications:

  • Enterprise Control Language (ECL) and the LexisNexis High Performance Computing Cluster (HPCC) platform.

  • All-source data analysis to perform analytic support.

  • Developing custom algorithms to support analytic requirements against massive data stores.

  • Directly supporting technical analysis using massive data processing systems.

  • Writing cables.

  • Planning and coordinating program activities such as installation and upgrading of hardware and software, utilization of cloud services, programming or systems design and development, modification of IT networks, or implementation of Internet and intranet sites.

  • Deploying web applications to a cloud-managed environment, including DevOps and security configuration management.

  • Developing, implementing, and maintaining AWS cloud infrastructure services such as EC2, ELB, RDS, S3, and VPC.

  • Planning, coordinating, and executing the activities required to produce documentation that meets data compliance requirements (e.g., legal, data policy).

  • Degree(s): Undergraduate degree in mathematics, computer science, engineering, or a similar scientific or technical discipline; graduate degree in computer science, information systems, engineering, or another scientific or technical discipline; or a degree (or equivalent) in CS, MIS, economics, physics, genetics, or an engineering-related field, especially one that is supercomputing-related.

    RCS is an equal employment opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability status, protected veteran status, or any other characteristic protected by law.

Our company uses E-Verify to confirm the employment eligibility of all newly hired employees. To learn more about E-Verify, including your rights and responsibilities as an applicant, please visit www.dhs.gov/E-Verify.

All RCS work locations are drug-free workplaces.
