Cincinnati Bell Jobs


Job Information

Cincinnati Bell Data Engineer II - Remote, United States

Job Purpose:

The primary role of the Data Engineer II is to support the extraction and transformation of data between systems across the enterprise. The role focuses on creating, delivering, and maintaining ETL and data packages across Microsoft systems that support the needs of the entire organization.

Essential Functions:

  • Employ a variety of programming languages and tools (e.g., Python, C++, IDEs) to integrate systems, data packages, and infrastructure into the AWS data lake, SQL/MySQL databases, and data stores - 20%
  • Design, build, and maintain the AWS data lake environment for CBTS data infrastructure - 20%
  • Build APIs, databases, and data sets for internal and external reporting and system consumption - 15%
  • Lead the Data Engineering team, provide coaching, and establish data engineering best practices such as code review, optimization, ETL tuning, agile methodology, and version control - 15%
  • Develop and maintain front-end user interfaces for data input/output, optimized for user experience - 15%
  • Collaborate with Analytics & Insights team members on agile approaches to achieving project goals - 15%

Education:

Four years of college resulting in a bachelor's degree, or equivalent

Certifications, Accreditations, Licenses:

  • AWS Data Analytics or Azure Data Engineer - Required
  • AWS Certified Solutions Architect Professional - Preferred
  • AWS Certified Solutions Architect Associate - Preferred
  • MCSE: Cloud Platform and Architecture - Preferred
  • MCSA: SQL 2016 - Preferred
  • MySQL Database Administration - Preferred

General Work Experience:

3 to 5 years

Previous Work Experience:

  • 3+ years in corporate data engineering/data lake development
  • 2+ years in cloud development (S3, RDS, Glue, Firehose, Athena, Glue crawlers, Lambda, CloudWatch)
  • 3+ years in database development (RDS, SQL, NoSQL, MSSQL)

Special Knowledge, Skills and Abilities:

Required

  • Demonstrated proficiency working with APIs, SDKs, and relational databases (including MS SQL Server) - Required
  • Demonstrated strong proficiency in cloud data lake environments (AWS, Azure) - Required
  • Strong working knowledge of at least one of the following tech stacks - Required:
      • AWS: S3, RDS, Glue, Athena, Lambda, Step Functions, CloudWatch
      • Azure: Blob Storage/Data Lake Storage, Data Factory, Event Hubs, Data Lake Analytics, Azure Functions, Azure Monitor
  • Visual Studio or comparable IDE proficiency - Required
  • Demonstrated proficiency working with SQL views, stored procedures, and functions - Required
  • Experience with ETL tools (e.g., Talend, Pentaho, SSIS) - Required
  • Strong understanding of APIs, programmatic data extraction, and scraping techniques - Required
  • Strong working knowledge of Git - Required

Preferred

  • 2-4+ years of MySQL experience - Preferred
  • 2-4+ years of working knowledge of Python, C++, Node.js, or JavaScript - Preferred
  • 2-4+ years of working knowledge and a strong understanding of relational data structures - Preferred
  • 2+ years of working knowledge of non-relational data structures (e.g., MongoDB, Cassandra, DynamoDB) - Preferred
  • Familiarity with Agile methodologies - Preferred
  • Hands-on experience installing, configuring, and using Hadoop ecosystem components such as MapReduce, HBase, Hive, Sqoop, Pig, ZooKeeper, and Flume - Preferred
  • Familiarity with scientific and statistical methodologies - Preferred

Supervisory Responsibility:

No supervisory responsibility

Work Environment:

Fully Remote

Equal Opportunity Employer Minorities/Women/Protected Veterans/Disabled
